Climate Change – Part I – The Basics


Climate change is a global issue that has wedded science to politics while transcending the social responsibilities held by both institutions. A polarizing subject in many ways, climate change is considered one of the most daunting challenges humanity currently faces; at its crux is an initiative toward global communication and environmental responsibility.

To this day, there remains a schism between the public and the scientific community when it comes to understanding climate change and what it means for our world. In a manner that echoes the development of various other issues over the course of history, climate change highlights a certain measure of conflict between science and ignorance.

Investing the time to learn the basics can make the difference between being knowledgeable and informed or confused and manipulated. This is particularly crucial because climate change is a phenomenon with wide implications for civilization, one that emphasizes the need for humanity to collaborate in tackling the problem.

In this three-part series, we will address various facets of this issue, ranging from the basics of the science behind the phenomenon to the consequential symptoms or effects of climate change for the present and the future. We will conclude by discussing the options that we must consider in our transition to achieve progress.

Let’s begin!

Dissecting Weather and Climate

Let’s review the difference between weather and climate. Simply put, weather is local and short-term, while climate is long-term and doesn’t relate to one single location. More precisely, the climate of an area describes the average weather conditions in a given region over a long period of time, conventionally three decades or more, though climate scientists also study changes unfolding over many thousands of years. So, whenever we pass a few winters that aren’t as cold as usual, it does not signify a change in climate. Such events are anomalies that don’t represent any long-term change.

Moving forward in our discussion, it is also imperative that we don’t underestimate the effects of small changes in climate. To put this in perspective, the “Ice Age” often talked about by scientists involved a world where the Earth’s average temperature was only 5 degrees Celsius cooler than modern-day averages. Small changes in climate can equate to major effects around the world.

Climate Change or Global Warming

We often hear the phrases climate change and global warming used interchangeably to describe climate transitions, but there is a subtle difference. In the early 20th century, scientists used the term climate change when writing about events such as ice ages. But once scientists recognized the specific risks that human-produced greenhouse gases pose to the Earth’s climate, they needed a term to describe it.

Wallace Broecker’s 1975 paper in the journal Science, entitled “Climatic Change: Are We on the Brink of a Pronounced Global Warming?”, introduced the phrase global warming into the public lexicon.

Soon enough, the phrase global warming gained currency, and the term global change emerged as a way to describe all modes of large-scale impact on the planet, including issues such as the Antarctic ozone hole.

The planet as a whole is warming, but scientists prefer the terms global change or global climate change. The reasoning is that global warming can be read as a uniform effect (warming everywhere on Earth), while a few regions may in fact cool slightly even as the planet overall warms. It is also a popular opinion that climate change sounds less frightening than global warming, even though the latter catches more attention in the public eye. Some scientists and activists prefer global warming precisely because it implies human involvement in driving the transition.

So, is the planet really warming up?

The short answer: YES! After laboriously working through a century’s worth of temperature records, various independent teams of scientists have converged on a rise of about 0.8 degrees Celsius in the average surface air temperature of Earth between the period 1850–1900 and the period 2003–2012. While this degree of warming may not sound like a big deal, it makes a big difference when it is sustained every day. Small changes can become amplified into bigger ones, and any warming serves as a base from which heat waves become worse. The effects are especially strong in certain locations, like the Arctic, which has experienced marked overall warming. Apart from the numbers, there’s a wealth of environmental evidence to bolster the case that the Earth is warming up. Without going into too much detail:

(1) Ice on land and at sea has melted dramatically in many areas outside of interior Antarctica and Greenland.

(2) The growing season has lengthened around much of the Northern Hemisphere.

(3) Various forms of life, including mosquitoes, birds, and other creatures, have migrated to higher altitudes and latitudes due to the increasing warmth. Likewise, many forms of marine life are moving poleward (their shift in range is 10 times the average for land-based species).

Other observations from the Intergovernmental Panel on Climate Change (IPCC) highlight a warming trend over the last 50 years nearly double that of the last 100 years; an increase in ocean temperature to greater depths (the oceans absorb 80% of the heat added to Earth’s climate system); increasing droughts; increased precipitation in eastern regions of the Americas and northern regions of Europe and Asia; and drying trends in Africa and the Mediterranean.

How Does Global Warming Work?

Global warming is caused by an increase in the greenhouse effect. The greenhouse effect is not bad on its own, and is in fact a natural circumstance of the Earth’s atmosphere. It is also the reason why the Earth is warm enough for life to survive.

The greenhouse effect, in essence, involves a play of energy balance on Earth. When sunlight reaches our planet, about 30% of it gets reflected or scattered back to space by clouds, dust, or the Earth’s surface. More than 20% of the sunlight is absorbed in the atmosphere, mainly by clouds and water vapor. Lastly, almost 50% is absorbed by the Earth’s surface, including land, forests, pavement, and oceans.

Now, all this energy doesn’t stay on the Earth permanently. If it did, the Earth would literally be on fire. Instead, the Earth’s oceans and land masses re-radiate the heat, some of which makes it into space. Most of it, though, is absorbed by clouds and greenhouse gases, which in turn radiate the heat back to the surface and some out to space. Since less heat makes it out through the Earth’s atmosphere, the planet becomes warmer. It is basically an energy-imbalance scenario: more energy comes in through the atmosphere than leaves the Earth.
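The balance described above can be sketched with a standard back-of-envelope calculation (the solar constant, 30% albedo, and Stefan–Boltzmann constant below are textbook values, not figures from this article): without any greenhouse effect, the Earth’s surface would settle near −18 °C.

```python
# Back-of-envelope radiative balance for an Earth with no greenhouse effect.
SOLAR_CONSTANT = 1361.0   # W/m^2 of sunlight at the top of the atmosphere
ALBEDO = 0.30             # fraction reflected straight back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight, spread over the whole sphere (hence the factor of 4),
# must be re-radiated as heat: SIGMA * T^4 = S * (1 - albedo) / 4.
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4
T_no_greenhouse = (absorbed / SIGMA) ** 0.25

print(round(T_no_greenhouse))  # ~255 K, i.e. about -18 degrees Celsius
# The observed average surface temperature is ~288 K (+15 C); the ~33 K
# difference is the warming supplied by the natural greenhouse effect.
```

The ~33 K gap between this idealized airless Earth and the real one is exactly the natural greenhouse warming the text describes.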

The two main components of air are nitrogen (78%) and oxygen (21%), both of which are inefficient at absorbing radiation from the Earth due to their two-atom structure. On the other hand, gases with three or more atoms can capture energy far out of proportion to their scant presence. These are the greenhouse gases, the ones that keep Earth habitable. That’s all well and good, but the same gases also warm the Earth: the more greenhouse gases we add to the atmosphere, the more our planet warms. The major players include carbon dioxide, nitrous oxide, methane, and water vapor (individually weak, but abundant, as we’ll see below).

 Greenhouse Gases: What’s Happening? 

The greenhouse effect is driven by naturally occurring substances in the atmosphere, and it maintains the radiative balance of the Earth described earlier. Unfortunately, since the Industrial Revolution, humans have been pouring huge amounts of greenhouse gases into the atmosphere, tipping the balance toward an amplified warming of the planet.

Carbon dioxide makes up less than 0.04% of the Earth’s atmosphere, most of it originating from early volcanic activity on the planet. Today, we are pumping huge additional amounts of the gas into the atmosphere: it is produced when fossil fuels are burned, when people and animals breathe, and when plants decompose. Extra carbon dioxide results in more energy absorption and an overall increase in the temperature of the Earth’s atmosphere. In fact, the average surface temperature of the Earth rose from 14.5 degrees Celsius in 1860 to 15.3 degrees Celsius in 1980.

Nitrous oxide is another important greenhouse gas. While human activity doesn’t release nearly as much of it as carbon dioxide, nitrous oxide absorbs much more energy per molecule. The use of nitrogen fertilizer on crops, for example, is a major source of nitrous oxide emissions.

Methane is a combustible gas and the main component of natural gas. It also occurs naturally through the decomposition of organic material. Man-made and biological sources of methane include coal extraction, the digestive gases of large livestock, bacteria in rice paddies, and garbage decomposition. Like its fellow greenhouse gases, methane absorbs infrared energy and keeps up the heat on Earth.

Beyond their potent effects, it takes a long time for the planet to naturally recycle these gases. For example, a typical molecule of carbon dioxide can stay airborne for more than a century. Thus, greenhouse gases have both a potent and a long-standing impact on the Earth’s ecosystems. A few other gases that make up the rest of the greenhouse players include the chlorofluorocarbons (CFCs), water vapor, and ozone. Water vapor is particularly interesting: it isn’t a very strong greenhouse gas, but makes up for this in sheer abundance. As global temperatures rise, oceans and lakes release more water vapor, up to 7% more for every degree Celsius of warming, which adds to the warming cycle.
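As a quick sketch of how that 7%-per-degree figure compounds with continued warming (a minimal illustration, not a climate model):

```python
# The ~7% more water vapor per degree Celsius quoted above compounds:
# each degree of warming multiplies the vapor content by another 1.07.
def extra_water_vapor_fraction(degrees_c, rate_per_degree=0.07):
    """Fractional increase in atmospheric water vapor after a given warming."""
    return (1 + rate_per_degree) ** degrees_c - 1

print(round(extra_water_vapor_fraction(1) * 100, 1))  # 7.0  (% after 1 degree)
print(round(extra_water_vapor_fraction(3) * 100, 1))  # 22.5 (% after 3 degrees)
```

Three degrees of warming thus implies over a fifth more water vapor aloft, each increment feeding back into further warming.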

What’s next? 

In conclusion, the mechanisms involved in climate change, or global warming, are largely positive feedbacks that amplify the warming of the planet: the evaporation of water from the oceans roughly doubles the impact of a carbon dioxide increase, melting sea ice reduces the amount of sunlight reflected back to space, and so on. While not all feedbacks are certain, it is well established that the planet has to constantly readjust to the changes we make to our environment, which in the case of global warming means the consistent addition of greenhouse gases to the atmosphere. So far, I have laid the basic groundwork for the symptoms we can expect to see in our environment as a consequence of global warming. In Part II, we will consider those changes in greater detail, and what they entail for the future of our planet.

Hypernovas: Explosions in Space

Big Numbers in Time

The average lifespan of a human being is approximately 80 years. This number pales in comparison to the lifespans of celestial bodies such as the stars, luminous spheres of plasma that illuminate the night sky (Figure 1).

Figure 1. Stars burning brightly in the expansive void of a night sky.

The lifespan of a star depends on its mass. Interestingly, the more massive the star, the faster it fades away. The Sun, for example, is about 4.6 billion years old and will last roughly another 5 billion years. Stars 10 times the mass of the Sun burn for only 100 million years or so, while stars one-tenth the mass of the Sun burn for 100 billion years or longer. Those are some large numbers being thrown about, but in astronomy such orders of magnitude are quite common. To put this in perspective, our ancestors have been around for about six million years, but the modern form of humans, Homo sapiens, evolved only about 200,000 years ago. Human civilization is a recent enterprise of some 6,000 years, with industrialization beginning only in the 1800s. Our existence is a fleeting instant compared to the lives of stars, a rich history woven through several evolutionary stages of enormous extravagance, which raises the question: what happens when a star dies?
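A rough rule of thumb captures why heavier stars die faster: luminosity climbs steeply with mass (roughly L ∝ M^3.5), while the fuel supply grows only linearly, giving a lifetime t ∝ M/L ∝ M^-2.5. The sketch below applies that scaling to the Sun’s ~10-billion-year lifetime; it is an approximation that reproduces the trend, not the exact figures quoted above.

```python
# Approximate main-sequence lifetime from a simple mass-luminosity scaling:
# fuel ~ M, luminosity L ~ M^3.5, so lifetime t ~ M / L ~ M^-2.5.
SUN_LIFETIME_GYR = 10.0  # total main-sequence lifetime of the Sun, ~10 Gyr

def lifetime_gyr(mass_in_solar_masses):
    """Rough lifetime in billions of years, assuming t ~ M^-2.5."""
    return SUN_LIFETIME_GYR * mass_in_solar_masses ** -2.5

# The heavier the star, the shorter its life:
for m in (0.1, 1.0, 10.0):
    print(f"{m:>4} solar masses -> {lifetime_gyr(m):,.2f} billion years")
```

Even this crude power law spans five orders of magnitude in lifetime across just two orders of magnitude in mass.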

Colors of an Explosion

Stellar evolution is the process by which a star changes over the course of time. As mentioned earlier, the more massive a star, the faster it burns out. All stars are born from nebulae, clouds of gas and dust, and over the course of millions of years, these proto- or infant stars settle down, and transform into what is known as a main-sequence star. The Sun is a typical main-sequence star (Figure 2).

Figure 2. An artist’s depiction of the evolution of a Sun-like star from its main-sequence phase (far left) to a planetary nebula (far right).

The death of a star is intrinsically related to the energy source that powers a star for most of its life: nuclear fusion. If one were to peel open a star, one would find a layered structure like that of an onion. Initially, a main-sequence star generates energy through the fusion of hydrogen atoms at its core. Hydrogen atoms fuse to produce helium, resulting in an abundance of the latter and the depletion of the former fuel. Eventually, the star begins to fuse hydrogen along a spherical shell surrounding a mostly helium core. This process causes the star to grow in size and evolve into what is known as a red giant. Stars with at least half the mass of the Sun can also ignite helium fusion at their core, while more massive stars fuse heavier elements in a series of concentric shells. In general, the more onion rings, the more massive the star (Figure 3).

Figure 3. Onions have layers, stars have layers.

Once this nuclear fuel has been exhausted, a star like the Sun collapses into a dense, small body known as a white dwarf, with much of its outer layers being expelled into a planetary nebula (Figure 4).

We learn about the stars by receiving and interpreting the messages which their light brings to us. The message of the Companion of Sirius when it was decoded ran: “I am composed of material 3,000 times denser than anything you have ever come across; a ton of my material would be a little nugget that you could put in a matchbox.” What reply can one make to such a message? The reply which most of us made in 1914 was—”Shut up. Don’t talk nonsense.” – Sir Arthur Eddington

The term “planetary nebula” is a misnomer; it does not refer to nebulae made of planets. The name originated in the 1780s, when astronomer William Herschel viewed these objects through his telescope and named them for their resemblance to the rounded shapes of planets.

Figure 4. The Ring nebula (M57), a diffuse shell of gas and dust ejected from the parent star at the center.

Stars more massive than the Sun can explode in a supernova, releasing much of their material in a shock wave into the vacuum of space. By releasing the bulk of the chemical elements originally forged in their cores (the list includes hydrogen, helium, carbon, neon, oxygen, silicon, and iron), these stars enrich the interstellar medium. The shock wave produced by a supernova also helps trigger the formation of new stars. The cores of such massive stars collapse into an extremely dense neutron star or, in certain cases, a black hole (Figure 5).

A normal-sized matchbox containing neutron-star material would have a mass of approximately 13 million tonnes, equivalent to a 2.5 million m³ chunk of the Earth (a cube with edges of about 135 metres).
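A quick consistency check on these figures (Earth’s mean density of ~5,515 kg/m³ is a standard value; the rest follows from the numbers quoted above):

```python
# Sanity check of the matchbox comparison: given ~13 million tonnes of
# neutron-star material, how big a chunk of ordinary Earth rock matches it?
MATCHBOX_MASS_KG = 13e6 * 1e3   # 13 million tonnes, in kilograms
EARTH_MEAN_DENSITY = 5515.0     # kg/m^3, standard mean density of the Earth

volume_m3 = MATCHBOX_MASS_KG / EARTH_MEAN_DENSITY
edge_m = volume_m3 ** (1 / 3)

print(f"{volume_m3:.2e} m^3")        # ~2.4e6 m^3, close to the quoted 2.5 million
print(f"cube edge ~{edge_m:.0f} m")  # ~133 m, close to the quoted ~135 m
```

The quoted figures are mutually consistent to within rounding.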

The decisive factor is always the mass of the star, which in simple terms, is proportional to the strength of its gravity.

Figure 5. Interstellar’s Gargantua, a black hole, the literal heart of darkness.

Gravity is a one-directional force, in that it is always attractive and tries to pull things together. We are held to the surface of the Earth by the planet’s gravitational force. A black hole is born when an object is unable to withstand the compressing force of its own gravity. Stars use nuclear fusion to maintain a tenuous balance, over millions to billions of years, in an exhaustive fight against gravity. The Sun will never become a black hole, as its gravity isn’t sufficient to overpower the force produced by its nuclear furnace. But in more massive stars, gravity ultimately wins.

Then what are hypernovas?

Even Bigger Explosions

Simply put, hypernovas are much the same thing as supernovas, just on a far grander scale. Hypernovas are extremely energetic supernovas; though they form in similar ways, the two are distinct phenomena (Figure 6).

Figure 6. Finally a Hypernova!

In a supernova, a star sheds its outer matter, leaving behind a dense core as a neutron star. In a hypernova, the force of the explosion tears the inner star apart as well. Hypernovas occur only in stars with more than 30 times the mass of the Sun. As in a supernova, the star runs out of fuel and can no longer support itself under the weight of its own gravity. As it collapses, the star explodes, spewing matter in all directions. The energy released within mere seconds of this explosion is greater than the energy the Sun will release in its entire lifetime.

Time for an analogy. The Sun radiates ~3.83 × 10^26 W of power. A standard table-lamp light bulb has a wattage of 60 W. Thus, the Sun radiates the energy equivalent of ~7 × 10^24 light bulbs. Supernovas shine with the brightness of 10 billion suns (if 1 sun = 7 × 10^24 light bulbs, then 10 billion suns = 10 × 10^9 × 7 × 10^24 = 7 × 10^34 light bulbs), their total energy output being ~10^44 J, which is the total energy output of the Sun over its 10-billion-year lifetime. Hypernovas release energy in excess of this amount. That’s a lot of light bulbs!
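The arithmetic above can be re-run directly (same inputs as the text; the ~7 × 10^24 figure is the rounded-up value of ~6.4 × 10^24):

```python
# Re-running the light-bulb analogy with the numbers from the text.
SUN_POWER_W = 3.83e26      # solar luminosity
BULB_W = 60.0              # a standard incandescent table-lamp bulb
SECONDS_PER_YEAR = 3.156e7

bulbs_per_sun = SUN_POWER_W / BULB_W       # how many bulbs match one Sun
supernova_bulbs = 10e9 * bulbs_per_sun     # a supernova ~ 10 billion suns

# Total energy the Sun radiates over a ~10-billion-year lifetime:
sun_lifetime_energy_j = SUN_POWER_W * 10e9 * SECONDS_PER_YEAR

print(f"{bulbs_per_sun:.1e} bulbs per sun")   # ~6.4e24 (~7e24 rounded up)
print(f"{supernova_bulbs:.1e} bulbs")         # ~6.4e34
print(f"{sun_lifetime_energy_j:.1e} J")       # ~1.2e44 J, i.e. ~1e44 J
```

The ~1.2 × 10^44 J result confirms the ~10^44 J lifetime-output figure quoted above.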

Two plausible formation scenarios currently proposed for hypernovas include:

  • A massive star (rotating at a very high speed or encased in a powerful magnetic field) exploding, with the inner core being ripped apart.
  • Two stars in a binary system colliding, forming one gigantic mass, and exploding.

The result is ultimately clear: a black hole is produced, and a huge amount of energy is released in the form of a gamma-ray burst, one of the brightest known events in the universe. The light released in a hypernova is several million times greater than all the light of the stars in the Milky Way galaxy put together.

Introducing the Universe

The world is very old, and human beings are very young. Significant events precede our appearance on Earth in what is an awesome vista of time. But in our vanity, we find the stubborn pride that motivates our claims and actions as a higher organism on this planet. Thus, we are blind to the overwhelming reality that our existence on Earth, and the very existence of the planet itself, is nothing more than a single thread woven into the rich tapestry that is the universe. There, beyond the confines of our world and among the stars, lie unfathomable mysteries of great wonder. Hypernovas are a small shade of that enormous spectrum of amazing phenomena in our universe.



Next Up on Let’s Get Thinking: Hypernovas!

This was a request from one of my readers! It should be a colorful post, so look forward to it, as we take a trip to outer space to peer into a rare, and beautiful phenomenon at the far reaches of our universe!

Why Is Snow So Bright?!

If one wishes to experience the full spectrum of the annual cycle of the four seasons, Edmonton is certainly the place to visit. Though it varies every year, you can expect an early start to spring around March, with summer setting the pace in June, autumn settling in with September, followed closely by winter arriving around October at the earliest. Winter, in fact, is the chief minstrel of Edmonton’s seasonal ballad (Figure 1), with Boreas providing for the brittle winds, and dense snowfall that sweep across the city during this season.

Figure 1. Edmonton’s winter skyline

Who doesn’t like snow? I myself have never denied an opportunity to jump into or wade my way through a dense pool of snow (just make sure you are wearing the appropriate gear for the occasion), or on some occasions to push others into one (my partner, Leina, in particular, could relate a few “sweet” memories). In fact, it was only after arriving in Edmonton, 19 years old to boot, that I first saw snow in my life. That was back in 2009, and now that 2016 has come to an end, I have rounded off seven years of my predominantly snow-filled life in Edmonton, Alberta, Canada. Despite all of this, if there is one thing I could never get used to in all these years, it would have to be waking up in the early hours of the day to the bright and mildly annoying pure, ambient white light emanating from the snow outside my apartment, which leads to the subject of this post: “Why Is Snow So Bright?”

The answer is quite simple. Snow has the highest albedo of any naturally occurring substance on Earth. Albedo is the percentage of light reflected off the surface of an object, and snow is ~90% reflective, which is why it is so damn bright. This raises the question of how a reflective surface can appear brighter than its diffuse illuminant (the sky, in this case). After a little back-reading, I found it reported that:

“Three factors are largely responsible for this visually striking effect: the law of darkening for the cloud cover, the reflectivity of the snow and the average landscape albedo, and the observer’s contrast sensitivity function.”

J.J. Koenderink and W.A. Richards, “Why is snow so bright?”, J. Opt. Soc. Am. A, Vol. 9, No. 5, May 1992.

We find that the explanation for the brightness of snow is a mixed physical and psychophysical phenomenon. While the paper by J.J. Koenderink and W.A. Richards goes into great detail on the scientific methods that support these observations, I will provide a summary covering some of the interesting facts found in the paper. The three aforementioned factors are examined in sequence, and the necessary conclusions derived accordingly.

The Scattering of Light

We begin with the law of darkening for the cloud cover. This involves intuitive observations we often make about the radiance or illuminance of the sky, which is not uniformly illuminated. This is quite noticeable depending on the elevation of our line of sight with respect to the horizon. Two factors are largely responsible for the darkening usually observed from the maximum brightness at the zenith (the point in the sky directly above us) to the grayish haze we identify as the horizon:

“The angular distribution of the forward scattering (average differential scattering cross section) and the backreflectance to the clouds off the surface of the Earth.”

Light, or electromagnetic radiation, from the sun is scattered by particles in the atmosphere. This is commonly known as Rayleigh Scattering named after the British physicist Lord Rayleigh (Figure 2), a principle that describes the scattering of light by particles much smaller than the wavelength of the radiation.

Figure 2. Lord Rayleigh

These particles can be individual atoms or molecules. The light from the sun is a mixture of all the colors of the rainbow. Using a prism, one can separate the “white” light of the sun into its different colors, forming a spectrum (Figure 3). These colors are distinguished by their different wavelengths. Our vision is limited to what is known as the visible part of the spectrum, ranging from red light at a wavelength of about 720 nm to violet at about 380 nm.

Figure 3. The visible spectrum (ROYGBIV)

In between, we have orange, yellow, green, blue, and indigo. The retina of the human eye has three different types of color receptors, most sensitive to red, green, and blue wavelengths, which provide our colored vision of the environment. On a clear, cloudless day, we observe that the sky is blue. This is because molecules in the air scatter blue light from the sun more strongly than they scatter red light. Meanwhile, at sunset we see the familiar red and orange haze because the blue light has already been scattered out and away from our line of sight (Figure 4).

Figure 4. Why is the sky blue?
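Rayleigh scattering intensity scales as 1/λ⁴, so the shorter the wavelength, the more strongly it scatters. A minimal sketch, using the visible-spectrum limits quoted earlier (720 nm red, 380 nm violet):

```python
# Rayleigh scattering: scattered intensity goes as 1 / wavelength^4,
# which is why the blue end of the spectrum dominates the daytime sky.
def scattering_ratio(long_nm, short_nm):
    """How much more strongly the shorter wavelength is scattered."""
    return (long_nm / short_nm) ** 4

print(round(scattering_ratio(720, 380), 1))  # ~12.9x stronger for violet than red
```

A factor of roughly thirteen between the two ends of the visible spectrum is more than enough to tint the whole sky blue.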

Forward scattering is the subset of radiation scattering that involves changes in direction of less than 90 degrees. In contrast, the effect of the backreflectance off the surface of the Earth is found to be largely independent of the visual angle of observation, as the clouds of an overcast sky are roughly Lambertian: no matter from what angle an observer views a Lambertian surface, its brightness appears the same. Unfinished wood roughly exhibits Lambertian reflectance, while a glossy or coated wooden surface does not. These two factors, forward scattering and backreflectance, contribute to the radiance of the sky and the observed darkening from the bright zenith to the grayish horizon.

What about our eyes?

From here onwards, it is smooth sailing. The paper discusses the last two major factors: the reflectivity of snow together with the average landscape albedo, and the observer’s contrast sensitivity function. The albedo of snow typically ranges from 80% to 95% across the spectrum, with lower values for higher snow densities. Though snow is not a true Lambertian surface, the approximation is satisfactory. The landscape albedo figures into much of the calculation, and we find that only in extreme situations is the radiance of the snow equal to the radiance of the horizon sky. In general, a whiteout (Figure 5) is only possible if the reflectance of the landscape is above 50%, which rules out most natural landscapes with the exception of snow itself.

Figure 5. Whiteout, a weather condition where visibility and contrast is severely reduced by snow (or sand). As can be observed, the horizon disappears completely.

Much of what is demonstrated in the paper shows that the contrast effect of snow can cause the sky at the horizon to appear darker than the zenith sky. But, the zenith sky is still found to be brighter than the snow, so why is it that we are not able to recognize this difference, and identify that the sky is indeed brighter than the snow? The answer is once again quite simple. The sky at the horizon is darker than at the zenith owing to the law of darkening described earlier. This results in a gradient over the circular dome above us, but one that is so shallow that the gradient is generally not noticeable to the comparative resolution of our eyes, thus leading us to believe that the snow is in fact brighter than the sky that illuminates it.


  •  J.J. Koenderink, and W.A. Richards, Why is snow so bright?, J. Opt. Soc. Am. A, Vol. 9, No. 5, May 1992.


On the Nature of Knowledge


So, after a week of thoughtful contemplation amid myriad deadlines, I’m excited to finally post my discussion “On the Nature of Knowledge.” I weighed two approaches to presenting this topic: one grounded in philosophy, and the other inspired by my personal experience as a student. Ultimately, I’ve decided to stick with the latter, as it is consistent with how I’ve addressed most of the topics posted on this blog. For anyone wishing to tackle the same topic from a philosophical perspective, check out epistemology (the Stanford Encyclopedia of Philosophy provides an awesome introduction to the subject).

Our discussion will be divided into three separate parts dealing with the following questions:

(1) What is knowledge?
(2) What is knowledge from a student’s perspective?
(3) What is the purpose of knowledge?

Seems simple enough!

My objective today will be to share my personal experience and growth over the last seven years of my undergraduate and graduate studies, during which I actively and repeatedly engaged these questions. I’m well aware of the various generalizations that can be made in answering these questions, but my opinions will converge and revolve around the viewpoints I’ve accepted in my personal journey to discover those same answers as a student. Let’s begin!

What is knowledge?

I believe knowledge can be defined via three categories: personal, factual, and action-based knowledge.

Personal knowledge revolves around the knowledge gained by acquaintance with the objects, events, and people in one’s environment. Having just arrived in Canada for my undergraduate studies, the foundation of my life was built around the expectations and experiences I had with my family living in India, Egypt, and Sudan. Commencing my studies at the University of Alberta while living in student residence, working part-time, and volunteering in various activities, my personal growth as an individual continued as I mingled with and became familiar with an alien environment. My new-found freedom allowed me to fully experience and question my individuality, a process that would culminate in my identity crisis several years down the road (one that I have thankfully resolved). Knowledge, in this sense, comes from my familiarity with the objects in my environment and the recognition I grant them, and was highly influential in defining my identity and my decisions. Altogether, personal knowledge is very much a book in progress in our individual lives. Its measures and ends are dictated by our environments, personal motivations, and growth, while actively influencing all three of those aspects.

Action-based knowledge is the knowledge of how to do something. This would involve one’s abilities to do something, like driving a car or starting a campfire.


On the other hand, factual knowledge, as the name suggests, is the knowledge of facts, and it is different from action-based knowledge. One may know the theory behind driving a car while not actually knowing how to drive one. Factual knowledge is evident in both action-based and personal knowledge. With personal knowledge, in order to speak with others, one must know how to communicate; one doesn’t know a person just by meeting them, one must also know a few things about them. Similarly, with action-based knowledge, one must know certain facts about driving, like how the car moves in response to the steering wheel, to actually drive the car.

Despite this, factual knowledge alone is not enough. Personal knowledge involves the need for action-based knowledge, which helps an individual acquire the necessary skills to interact with his or her environment, and action-based knowledge may require some factual knowledge, but that factual knowledge does not by itself amount to action-based knowledge. In fact, one could say that there is no definitive standard of connection between these three categories of knowledge, seeing how much they intermesh. For the philosophy lovers, epistemology deals largely with views of factual knowledge.

What is knowledge from a student’s perspective? 

How does this all come together for a student? Well, one of the main reasons we go to school is to cultivate our knowledge and understanding of the world. At university, this may largely be oriented by our aspirations toward a field that would preferably model our future careers. I say “may” because I believe the purpose of higher studies does not have to revolve primarily around one’s career or prospective choice of employment (this in itself leads to the crucial discussion of the structures of education, or educational systems).


As students, much of our time at university involves absorbing factual knowledge before actually implementing it in the real world. Our action-based knowledge is attested by our success with such implementations. It is quite similar to the scientific method, where theory precedes experiment in a repeating cycle. This is where we also learn the difference between the static process of remembering knowledge and the dynamic process of applying it. This ability to learn from and interact with our environment is at the core of a social behavior whose roots were sown in our evolution as a species.

Factoring into this is the personal knowledge that every individual exhibits. As a student, you’re part of a community, one that you may or may not socialize with (each with its own share of circumstances). Putting aside the knowledge we gain from our courses, the personal knowledge we exhibit provides for the competitive play of our social lives, from networking to the establishment of our status, while satiating our thirst and drive for recognition.

All of which now leads us to ask, what is the purpose of knowledge in general?

What is the purpose of knowledge? 

Personally, to this day, I believe an individual’s knowledge is characterized not only by their ideas, but also by how they act upon them. The question of the purpose of knowledge derives greatly from the means of education an individual may seek, which, by itself, is an even bigger discussion.

I’ve come to recognize how influential the methods used to propagate knowledge at an academic institution can be on its community (teachers and students alike). After my four years of undergraduate studies, I was spent, and in many ways had to rediscover my personal creativity and motivation. Following a gap year, I pursued graduate studies, which I recently completed. Looking back at my experience, a large part of my journey had its run-of-the-mill circumstances surrounding an identity crisis, but I cannot deny that it came with its share of new and enlightening perspectives on the educational systems of modern-day academic institutions.

What is the purpose of knowledge? I believe it is what it is, for every one of us, however we wish to see it.


If there is one attribute of my personality that I have always been proud of, it would be my undying curiosity and endless thirst for knowledge. Over my life, this has grown from a wish to understand the world, to sharing said knowledge, and to contributing my own by enhancing its source. The Pensive Reverie is, in fact, a personification of my desire to share my knowledge, as an individual, with the world. Ultimately, as Francis Bacon put it, “Knowledge is power,” but I also believe what we do with said power defines its object for each and every individual.

Electricity: Principles, and Applications!

Electricity is a ubiquitous phenomenon. It is now ingrained in the various facets and activities of our daily lives, to the point where its existence and influence are very much taken for granted, with nothing more than a modicum of appreciation for the singular force that powers the technologies serving as the foundations of modern-day society. So, what exactly is electricity?

Honestly, it’s a difficult question. In my opinion, one of the greatest delights of being a physicist involves a deep admiration for the unknown, and an acknowledgment of my own lack of knowledge. It has motivated me to persevere, and to strive to learn as much as I can about the world we occupy, and its myriad mysteries.

Electricity is one such mystery.

It certainly is…

If I were to teasingly paraphrase Master Kenobi’s words,

“Electricity is what gives technology its power. It’s an energy field…it surrounds us, and penetrates us. It binds the galaxy together.”

In a way, this is true (a more precise statement would substitute electromagnetism for electricity), as we find electricity everywhere: from the lightning overhead, to the crackling static sparks of warm laundry, and even the functional impulses of the human nervous system. Electricity powers our world, and our bodies.

In this article, I’ll try to illuminate, to my best effort, the nature of electricity, its origins, and its practical applications.

Off to Miletus

Science finds its origins in the experimental method, which in ancient times largely concerned the observation and analysis of the surrounding world. The Greeks were stalwarts of both ancient philosophy and science, and among them lived a philosopher of high regard named Thales of Miletus (624–546 B.C.).


Thales was one among the legendary Seven Sages of Greece, a title given by ancient Greek tradition to seven philosophers, statesmen, and law-makers of the early sixth century B.C. who were renowned for their wisdom throughout the centuries.

Now, while the Greeks didn’t fully understand electricity, they certainly were aware of its existence. Thales is considered to have been the first person to have studied electricity. He found that by rubbing amber, or fossilized tree resin, with fur, he was able to attract lightweight objects like dust and straw. He also noticed that lodestone (a naturally magnetic material) attracted bits of iron (magnetism is a close friend of electricity, but more about that later). The word electricity is coined from the Greek word elektron, meaning amber. Thales’ work involved the first experiments in electrostatics, the study of stationary electric charges or static electricity.

Centuries would pass until electricity found a foothold in modern science and engineering. During this transition, and particularly in the 1700s, electricity was conceptualized as a fluid. Familiar names such as William Gilbert, who repeated Thales’ experiments, Luigi Galvani, who asserted electricity to be the source of animation or animal motion, and Ben Franklin, who showed that lightning is electric in nature and that electricity is constituted of positive and negative elements, are among the many personalities who helped the scientific community form a clearer picture of how electricity works.

In the end, it was a French scientist named Charles-Augustin de Coulomb who summed up the work of his peers and, through his experiments, formulated what is now popularly known as Coulomb’s Law.


Coulomb’s law states that like charges repel, and opposite charges attract, with a quantified electric force that is proportional to the product of the two charges, and inversely proportional to the square of the distance between them.
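To put the inverse-square relationship in concrete numbers, here is a minimal sketch in Python; the constant is the standard Coulomb constant, and the charges and separation are made-up illustrative values:

```python
# Coulomb's law: F = k * |q1 * q2| / r^2
COULOMB_CONSTANT = 8.9875e9  # k, in N·m²/C²

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force (newtons) between two
    point charges q1, q2 (coulombs) separated by r metres."""
    if r <= 0:
        raise ValueError("separation must be positive")
    return COULOMB_CONSTANT * abs(q1 * q2) / r**2

# Two 1 µC charges held 1 cm apart push on each other with about 90 N.
print(coulomb_force(1e-6, 1e-6, 0.01))
```

Halving the distance quadruples the force, which is exactly the "inversely proportional to the square of the distance" part of the law.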

Despite all this progress, the fundamental nature of electricity still eluded the scientific community.

Enter the Atomic Theory

Matter, as we now know, is composed of atoms. An atom is in itself composed of subatomic particles such as protons, and neutrons, concentrated in a nucleus, and surrounded by orbiting electrons. (A particle physicist may offer a slightly different description, as we have now found that protons, and neutrons are also made of constituent particles called quarks.)

Scientists discovered the electron in the late 19th century. This discovery set the stage for the rise of subatomic theory and the beginning of the modern era of electricity, followed swiftly by a rush of advances in technology.


There are various types of materials, but in the world of electricity, there are two primary categories: electrical insulators and electrical conductors. Electrical insulators are materials that don’t conduct electricity very well; wood is a wonderful example. Interactions between materials are dominated by the sharing or exchange of electrons, but insulating materials are very reluctant to share electrons, because the electrons in insulators are tightly bound to their atoms.

Conductors, as you may have guessed, allow for this interaction, as their electrons can detach from their atoms and fly about freely. These loose or free electrons make it easy for electricity to flow through these materials, aptly confirming their namesake as electrical conductors. Most metals are conductors. The motion of electrons transmits electrical energy from one point to another.

This simple premise opens the gateway to the many applications of modern-day electricity, each of which was the answer to a fundamental question:

(1) How can we make electricity flow from one point to another? Generators

(2) How do we make electricity? Power plants

(3) How do we contain this electricity? Circuits


Electricity is the flow of electrons. A generator helps stimulate this flow, using a magnet! We’ve often observed how we can move paperclips and small bits of metal about a surface using a magnet. This is the principle behind the working mechanism of a generator: just as the magnet sets the paperclip in motion, a moving magnetic field sets the electrons in a conductor in motion.

Electricity and magnetism are equal partners, each capable of producing the other: by running electricity through a metal wire, one can form a magnetic field around the wire! Such observations are definitive of a link between electricity and magnetism, which eventually culminated in the successful formulation of Maxwell’s equations of electromagnetism.

But for now, let’s focus on electricity! Ultimately, the generator is a device that uses a magnet near a wire or conducting material to create a steady flow of electrons, and is the foundation of a power plant where electricity is made!
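As a rough numerical illustration of the principle (not a description of any real machine), Faraday's law says a coil of N turns rotating in a uniform magnetic field induces a voltage emf(t) = N·B·A·ω·sin(ωt). All the numbers below are assumed purely for illustration:

```python
import math

def coil_emf(t, turns=100, field=0.5, area=0.01, omega=2 * math.pi * 60):
    """Instantaneous EMF (volts) induced in a coil of `turns` loops of
    area `area` (m²) rotating at angular frequency `omega` (rad/s)
    in a uniform magnetic field `field` (teslas)."""
    return turns * field * area * omega * math.sin(omega * t)

# The EMF alternates sign as the coil turns -- this is why rotating
# generators naturally produce alternating current (AC).
peak_emf = 100 * 0.5 * 0.01 * 2 * math.pi * 60  # N·B·A·ω
print(f"Peak EMF: {peak_emf:.1f} V")
```

Spinning the coil faster (larger ω) or using a stronger magnet (larger B) both raise the induced voltage, which is why real generators are engineered around exactly those two levers.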

Power Plants

Power is the rate of doing work: it is defined as the energy transferred per unit time. To cause a particular change in a system, a necessary amount of energy is required, along with a specified interval of time in which the change is allowed to occur.

In physics, it is common to confuse work with power, but they are distinct quantities. Work is the energy required to produce a net change in the state of a system. A person carrying a crate up a set of stairs does the same amount of work whether he runs or walks, but more power is required for running, as the same work is accomplished in a shorter period of time.
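The crate example can be checked with a few lines of arithmetic; the mass, stair height, and climbing times below are made-up illustrative numbers:

```python
# Work vs. power for carrying a crate up the stairs.
g = 9.81       # m/s², standard gravity
mass = 20.0    # kg, an assumed crate
height = 4.0   # m, an assumed stair height

work = mass * g * height  # joules; identical whether we walk or run

power_walking = work / 10.0  # watts, taking an assumed 10 s
power_running = work / 4.0   # watts, taking an assumed 4 s

print(f"Work done:       {work:.0f} J in either case")
print(f"Power (walking): {power_walking:.0f} W")
print(f"Power (running): {power_running:.0f} W")
```

The work (about 785 J here) is fixed by the mass and the height alone; only the power changes with how fast the climb happens.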

Power plants make use of this concept. They work to provide electricity over a period of time. But to do so, a power plant requires a generator. Michael Faraday conceived an early form of a generator in which coils of copper wire are rotated between the poles of a magnet to produce an electrical current. In order to rotate the coils, a crank was used, similar in motion to using a pencil sharpener.

Crank the handle!

These old-fashioned pencil sharpeners consist of a wheel, an axle, and a wedge. The handle serves as the axle that turns a wheel attached to the gears inside the sharpener to sharpen the pencil.

Now, imagine using a similar apparatus to crank out electricity for a neighborhood! It isn’t practical or viable! We would have to put in a lot of work over a long period of time to generate even a reasonable amount of electricity. We have a generator; the challenge is to apply the technology in an efficient manner to provide mass outputs of electricity.

In order to convert the mechanical energy input (of cranking the handle) into a viable output of electrical energy, power plants seek the help of Mother Nature. There are many sources of electrical energy, from hydroelectric energy to wind energy, and all these technologies function using a fundamentally similar approach toward a common goal of producing electricity en masse.


Falling water has long been used as an energy source, from ancient farms to modern-day dams and hydroelectric plants, which use the enormous kinetic energy (or energy of motion) delivered by falling water to crank out electricity. Engineers begin by building a dam across a river to create a reservoir. This reservoir of water is allowed to flow through the dam wall along a narrow channel called a penstock. At the end of the penstock is a turbine, essentially a large propeller, whose shaft runs up into the generator. When water moves across the turbine, the propeller spins, causing the shaft to rotate, which in turn causes the copper coils of the generator to rotate. As these copper coils spin about the magnets, electricity is produced. Power lines carry this electricity from the plant to homes and distant areas. Et voilà!
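A back-of-envelope sketch shows why dams beat hand cranks: the available power is (efficiency) × (water density) × (gravity) × (height of the fall) × (flow rate). The head, flow, and efficiency figures below are assumed for illustration, not taken from any real plant:

```python
# Rough hydroelectric output: P = eta * rho * g * head * flow.
RHO_WATER = 1000.0  # kg/m³, density of water
G = 9.81            # m/s², standard gravity

def hydro_power_mw(head_m, flow_m3s, efficiency=0.9):
    """Electrical power (megawatts) from water falling through
    head_m metres at flow_m3s cubic metres per second, with the
    given turbine-and-generator efficiency."""
    return efficiency * RHO_WATER * G * head_m * flow_m3s / 1e6

# An assumed 100 m head with 50 m³/s of flow yields roughly 44 MW --
# millions of times what a hand crank could sustain.
print(hydro_power_mw(100, 50))
```

The formula is just the stairs example scaled up: the falling water does work (mass × g × height) every second, and "per second" turns that work into power.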

Senator Palpatine was the owner of a very powerful, and efficient electrical generator!

Now, while we have been successful in using a generator to “generate” electricity, there must be a means to contain this system of moving electrons. The answer to this involves the use of electrical circuits!

Electrical Circuits

An electrical circuit helps monitor the flow of electricity. A simple circuit would look like this:


Circuits are pretty much analogous to subway maps. The more complicated the circuit, the more complicated the map. During my early years in Edmonton, I felt quite confident about my ability to get around the city using the LRT (Light Rail Transit). This was partly due to how simple the system was.


I remember proudly mentioning to Leina, my partner, that if I were to ever travel to Japan, I should not have a problem finding my way about the city, only for her to show me the Tokyo subway map, and challenging me to find a particular route:


My answer speaks for itself. But, just as we gain familiarity with our daily routes to work or school through frequent use of public transport, by understanding the central principles of circuit theory (which, depending on how deep you want to go in the field, may involve a good undergraduate degree in electronics), one may eventually find their way about a circuit board like this one:

Breadboards are a great first step to getting your hands dirty with circuits!

Now, what does this all have to do with electricity? Circuits are necessary to monitor and regulate electricity. No matter the source of electricity, be it a battery, a fuel cell, or a solar panel, the source generally has two terminals: a positive and a negative terminal.

With reference to the simple circuit shown at the beginning of this section, electrons are pushed out of the negative terminal at a certain voltage (think of it as the pressure used to push the electrons, similar to how we may use a pump to push water through a pipe). The electrons then flow from the negative terminal to the positive terminal through a conductor of choice (like copper wire). These wires form a closed path from the negative to the positive terminal, forming a circuit. A load, such as a light bulb, placed in the circuit may use the electricity flowing through the wire as a power source to generate light. While electrical circuits can get exceedingly complex, these basic principles of electron motion, from the source, through a load, and back, remain the same.
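The battery-and-bulb loop can be summed up with Ohm's law (current = voltage / resistance) and the power relation (power = voltage × current). The 9 V battery and 45-ohm bulb below are assumed values, chosen only to make the arithmetic clean:

```python
# Minimal sketch of the simple battery-and-bulb circuit described above.
voltage = 9.0      # volts: the "pressure" pushing electrons from the terminal
resistance = 45.0  # ohms: an assumed light-bulb load

current = voltage / resistance  # amperes flowing around the closed loop (Ohm's law)
power = voltage * current       # watts dissipated in the bulb as light and heat

print(f"Current: {current:.2f} A")  # 0.20 A
print(f"Power:   {power:.2f} W")    # 1.80 W
```

Raising the voltage pushes more current through the same bulb, just as a stronger pump pushes more water through the same pipe.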

This concludes our discussion. Generators are the core mechanisms involved in making electricity; they are housed in power plants, which distribute the output electrical power to homes and businesses via power lines and electrical circuits.

So what’s the point of all of this? 

The point is…electricity is awesome!

My Master’s thesis focuses on a simple circuit involving what is called a Single Dielectric Barrier Discharge (SDBD) plasma actuator. While I could write a book on the device and its mechanisms (which I have, indeed, namely my thesis), a simple description should be good for now.

An actuator is a device that converts an electrical input into a mechanical output (much like the human body, where neural “electrical” impulses from our brain translate into our mechanical actions). The SDBD plasma actuator does the same, but does so using a medium known as a plasma, which is basically a soup of charged particles. Placing this device on an airplane wing and turning it on helps modify the airflow over the wing, reducing turbulence and drag while enhancing lift.

What’s this drag? When you’re in a car and you reach out the window, you can feel the force of the air against your open palm. This force is often referred to as the “drag” that your hand feels as air flows past it. It’s the same as when you walk through water: you feel its resistance, making your collective motions slower.

Airplanes are no different, feeling this frictional drag as they move through the atmosphere. The SDBD plasma actuator helps nullify this drag to a certain extent, aiding the airplane’s motion through the air. But in order to get the device to work in the first place, we need an electrical current! The SDBD plasma actuator is a Micro-Electro-Mechanical System (MEMS). Electricity is practically everywhere!

My goal for this article has been to talk about this physical force that is the primary benefactor of our daily lives, and a central principle behind the future of a technologically advanced human civilization. I hope I haven’t left anyone behind in the explanations provided above. I’ve tried my best to make the discussion concise and enjoyable for those with and without a scientific background. I hope everyone enjoyed reading this article!


  • “Electricity.” Encyclopaedia Britannica. August 22, 2016.
  • Young, Hugh D., Roger A. Freedman, and Lewis Ford. University Physics. 2008.
  • Gundersen, P. Erik. The Handy Physics Answer Book. Visible Ink Press, 2003.


“An Incomplete Eloquence”

An Incomplete Eloquence – a pretty interesting article on the use of marginalia, and a reader’s relationship to a book.

I don’t agree with all the points made by the author. After all, it is quite possible that a person who forgoes the use of marginalia isn’t necessarily failing to build a “relationship” with the book, nor guilty of not having “used” it well. Simply, the book may just be boring, inciting no particular inspiration in the reader. It may also be a personal preference of the reader, who in reality may enjoy an interesting read, and find the necessity to pause and collect their thoughts rather distracting.

The article was a pleasant coincidence, as I’ve spent the past month raking in a variety of book purchases amidst the summer sales at Chapters (the bookstore), and have recently been debating between using marginalia in those books or documenting my thoughts in a separate journal! For now, I’ve decided to use a separate diary to compile my ideas and my analysis of passages from the books I’ve read.

Nevertheless, I must admit there is “An Incomplete Eloquence” in the extensive use of marginalia that I myself utilized to a great extent throughout the course of my undergraduate studies.

I can testify that a few of my undergraduate physics books, in quantum mechanics or statistical mechanics, are pretty similar to this one, and could basically describe a book within another!