Lithium-Ion Batteries: Paving the Way to Sustainable Energy Storage

Lithium-ion batteries have taken over the energy storage sector, surpassing lead-acid and nickel-cadmium batteries with their state-of-the-art design and efficient mechanism. They are used widely across devices and have established themselves as the dominant rechargeable battery technology. Here is a breakdown of how this particular battery functions, and why its capacity reduces over time.

[Image: Lithium-Ion Battery Inside a Phone]

Roles of Anode, Cathode, and Electrolyte in Charging and Discharging

Like any other battery, the lithium-ion battery consists of an anode and a cathode. However, unlike what you may have learned in the electrochemistry chapter at school, this is no single-element electrode. In a lithium-ion battery, the anode (negatively charged electrode) is made of graphite with lithium intercalated between its flat hexagonal layers, which are separated by a relatively large distance of roughly 340 pm. Essentially, the graphite acts as a storage space to house lithium atoms.

It is already known that lithium, being an alkali metal, readily loses its outermost valence electron to form a lithium ion (Li+). This happens to the lithium atoms stored in the graphite: as they release electrons, the resulting lithium ions move out of the graphite. The released electrons then leave the anode and flow through the external circuit to the various electrical components of the device that need current.

Now let’s look at the cathode (positively charged electrode). Different cathode materials exist, but for now, let’s take one containing cobalt and oxygen: cobalt oxide (CoO2). Here cobalt has given up an electron to oxygen, so Co is oxidized to Co4+. The cobalt ion is now missing an electron, which it wants back. Hence the electrons released from the anode travel to the cathode and satisfy the cobalt ions’ shortage of electrons.

However, as more and more electrons reach the cathode, its negative charge increases, and since like charges repel, electron movement starts to slow down. Another issue is that the Li+ ions also want electrons back (i.e. they want a balance of charges). This is solved by adding an electrolyte, based on an organic solvent, between the anode and the cathode. The electrolyte is generally a lithium salt, such as LiPF6 or LiBF4, dissolved in ethylene carbonate or dimethyl carbonate. The electrolyte allows only ions to pass through it, so Li+ ions pass from the anode to the cathode. Each lithium ion then wedges itself into the cobalt-oxygen structure of the cathode, forming lithium cobalt oxide. Remember that the electrons moving externally from anode to cathode are gained by the cobalt ion (Co4+), not the lithium ion.
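
As a compact summary of the chemistry above, here are the simplified discharge half-reactions for the idealized graphite/CoO2 cell described in this section (standard textbook forms, written here in LaTeX):

```latex
% Discharge half-reactions (simplified)
% Anode (oxidation): lithiated graphite releases a lithium ion and an electron
\mathrm{LiC_6 \;\longrightarrow\; C_6 + Li^+ + e^-}
% Cathode (reduction): the lithium ion and electron are taken up by the oxide host
\mathrm{CoO_2 + Li^+ + e^- \;\longrightarrow\; LiCoO_2}
```

During charging, the same two reactions simply run in reverse.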

This describes the whole process of discharging. When you use your phone or laptop, electrons move externally from anode to cathode, and lithium ions move internally through the electrolyte from anode to cathode too. As this continues, the battery percentage on your screen decreases. When all the lithium ions have moved to the cathode and there is no more external electron flow, the battery is dead and your phone has no power.

But phones and laptops are not use-once-and-throw devices. You can reuse them again and again and again. How? Charging. When you plug your device into a USB charger, the charger supplies electrical energy that pulls electrons back from the cobalt atoms and sends them to the anode. The lithium ions are also pushed back to their initial home in the graphite structure of the anode. When all the electrons and lithium ions have returned to the anode, we say the battery is fully charged at 100%.

If you look closely, charging and discharging are exact opposites of each other, which leads to an interesting observation: lithium ions swing back and forth through the electrolyte in this battery. It’s almost like a swing. That’s why lithium-ion batteries are also known as swing (or rocking-chair) batteries.

[Image: The Inside of a Li-Ion Battery]

Extra additions: Separator and Collectors

Now that we have understood the purpose of the three main components (anode, cathode, and electrolyte) and how they work together, let’s add a few more components that enhance this battery even further. Let’s start with the non-conducting, semi-permeable separator. This porous layer sits in the middle of the electrolyte and prevents the anode and cathode from touching each other. If such contact does happen, things can get explosive, as the reaction accelerates uncontrollably. Next come the current collectors. To improve the conductivity of the anode and cathode, a conducting collector plate is attached to each: a copper layer on the graphite anode side and an aluminum layer on the cobalt oxide cathode side.

Why does a device’s battery capacity reduce over time?

Have you ever noticed that, over the years, your phone’s battery depletes faster? There are fairly simple reasons for this. The first is that, during recharging, incoming electrons and lithium ions can react with the electrolyte and its organic solvent to form the SEI (solid electrolyte interphase). The formation of this layer permanently consumes lithium ions, leaving fewer of them available. The fewer the lithium ions, the less charge the battery can hold, and the quicker it discharges.

The first cause is not in your hands, but the second one I am about to disclose is. If you let a battery fully discharge to 0%, you have essentially waited until all the lithium ions reached the cathode side. There, some lithium ions combine directly with oxygen to form lithium oxide, and some cobalt atoms combine directly with oxygen to form cobalt oxide. This, again, is irreversible, leaving fewer lithium ions available for charging and discharging.

To prevent this from happening, it is recommended to charge your phone or laptop when it is already at 20-40%, rather than waiting for it to drop dead before you plug it in.
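
To see how a small irreversible lithium loss per cycle adds up, here is a minimal sketch in Python. The capacity and fade rate are made-up illustrative numbers, not measured values:

```python
# Toy model of lithium-ion capacity fade (illustrative numbers only):
# each charge/discharge cycle permanently consumes a small fraction of
# the cyclable lithium (SEI growth, irreversible side reactions).
def remaining_capacity(initial_mah, cycles, fade_per_cycle=0.0005):
    """Capacity left after `cycles` full cycles, assuming a fixed
    fractional loss of cyclable lithium per cycle."""
    return initial_mah * (1 - fade_per_cycle) ** cycles

battery = 3000  # mAh, a typical phone battery
after_500 = remaining_capacity(battery, 500)
print(f"After 500 cycles: {after_500:.0f} mAh "
      f"({100 * after_500 / battery:.0f}% of original)")
```

Even a 0.05% loss per cycle compounds to a battery that holds only about three-quarters of its original charge after 500 cycles.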

The Black Hole Information Paradox

The black hole information paradox is one of the most notable paradoxes in the history of physics. It all started when a young physicist named Stephen Hawking published a quirky paper on the evaporation of black holes. Little did he know it would turn into one of the most head-scratching paradoxes of all time, fuelling countless breakthroughs across a variety of fields in physics. What is this paradox, and why did it stir such excitement and perplexity in the physics community?

[Image: Digital representation of a black hole]

Let’s start with the basics. Black holes, as we know, are unfathomably destructive objects in space, owing to the very strong gravitational force they exert. A black hole forms when a sufficiently heavy star collapses in on itself, compressing a large amount of mass down to a tiny point of infinite density, known as a gravitational singularity. To give you some perspective, to create a typical stellar-mass black hole you would have to concentrate around ten Suns into a point of no measurable size. That would be one really dense point.

[Image: Visual representation of the warping of space-time by a black hole]

So, how does a black hole possess such a strong gravitational force? Einstein’s theory of general relativity states that gravity arises from warps and curves in the fabric of space-time: the deeper the warp, the stronger the gravity around the body causing it. The singularity of a black hole, with its infinite density, creates an enormous dent in space-time, resulting in unparalleled gravity near it. So anything that passes the event horizon of a black hole, including light, has no chance of return.
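
As a quick numerical aside (my own sketch, not from the article), the radius of that point of no return can be estimated with the standard Schwarzschild formula r_s = 2GM/c²:

```python
# Schwarzschild radius r_s = 2GM/c^2: the event-horizon radius below
# which not even light can escape. A sanity check for the
# "ten Suns compressed to a point" black hole mentioned above.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

r = schwarzschild_radius(10 * M_sun)
print(f"Event horizon of a 10-solar-mass black hole: {r / 1000:.1f} km")
```

The answer comes out to roughly 30 km: ten Suns’ worth of mass, hidden behind a horizon smaller than a city.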

Now let’s go to the basics of quantum theory. As you know, every body in the universe is made of subatomic particles. You can describe the nature of these particles by measuring their spin, charge, and so on. These are known as the quantum properties (i.e. information) of the particle. Since every body is made of such particles, we can extrapolate to say that every body is described by its own unique quantum information.

Now that you understand what information we are dealing with here, let’s move to the most fundamental theory in quantum mechanics – the conservation of quantum information. If you’ve studied the conservation of energy in high school physics, then this should be a piece of cake for you.

The conservation of quantum information simply tells us that quantum information can neither be created nor destroyed. This principle is closely related to a result known as the no-hiding theorem.

Now that you’ve got a vague idea of general relativity and quantum mechanics, let’s get closer to the actual paradox.

Imagine a black hole. We know that any object passing its event horizon will practically get sucked into it. The object will be fully destroyed, but its quantum information won’t be. The information is out of sight and hidden, but it still exists, just somewhere we can’t see it. Therefore, the no-hiding theorem is not violated. The black hole stores that information inside it and perpetually grows in size as it consumes more objects.

This was the initial idea. Everything was perfect. No quantum information was destroyed, and no rules were violated. Nothing to worry about at all…

Until, of course, 1974, when Stephen Hawking combined general relativity and quantum mechanics in his paper (a very risky move indeed). Allow me to break down what he proposed.

In the vacuum of space (all around us, too), pairs of virtual particles pop into and out of existence very quickly. Vacuums are actually seething with action: virtual particles pop into existence in pairs by borrowing energy from the vacuum, then quickly annihilate each other and return the energy they took. These are known as quantum vacuum fluctuations.

Now suppose this very phenomenon occurs near the event horizon of a black hole: one particle falls into the black hole while the other escapes away from it. We already know that for these particles to pop into existence, they need to take energy from something; in this case, they take it from the black hole. Since the pair part ways in opposite directions instead of annihilating each other, the borrowed energy is never returned to the black hole. From Einstein’s famous equation E = mc² (where c is the speed of light in a vacuum), if energy (E) decreases, mass (m) decreases too. As a result, the black hole will slowly evaporate and eventually disappear altogether.
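
The evaporation described above can be sanity-checked with the standard Hawking-temperature and lifetime formulas, T = ħc³/(8πGMk_B) and t ≈ 5120πG²M³/(ħc⁴). This is a back-of-the-envelope sketch of those textbook results, not anything from Hawking’s original paper:

```python
# Order-of-magnitude check on Hawking evaporation for a
# one-solar-mass black hole, using the standard formulas.
import math

G, c = 6.674e-11, 2.998e8        # SI units
hbar, k_B = 1.055e-34, 1.381e-23
M_sun = 1.989e30                 # kg
YEAR = 3.156e7                   # seconds

def hawking_temperature(M):
    """Black-body temperature of the Hawking radiation, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time_years(M):
    """Approximate time for the hole to evaporate completely."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4) / YEAR

print(f"T ≈ {hawking_temperature(M_sun):.1e} K")
print(f"t ≈ {evaporation_time_years(M_sun):.1e} years")
```

The numbers show why nobody will watch this happen: a solar-mass black hole glows at tens of nanokelvin and takes on the order of 10^67 years to evaporate, vastly longer than the current age of the universe.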

[Image: Quantum fluctuations in the vacuum of space and near the event horizon of a black hole]

Let’s stop right there. If the black hole disappears, then the quantum information of all those objects stored inside it disappears as well. But that is a direct violation of the most fundamental quantum principle we discussed earlier, the no-hiding theorem. Quantum information cannot be destroyed! But it just was. This is the black hole information paradox.

Physicists were baffled and stunned by this paradox but were determined to research further and understand what was going on. Nobody knew whether the information was actually lost forever or somehow leaked out of the black hole before it disappeared. Hawking even had a bet with two other renowned physicists, Kip Thorne and John Preskill. Hawking and Thorne held that the quantum information had to be destroyed, which meant the rules of quantum mechanics would have to change to fit this observation. Preskill, on the other hand, argued that quantum mechanics showed the information must escape, so the rules of general relativity would need rewriting. Sounds like a paradox on its own 😉!

[Image: Preskill, Thorne, and Hawking (left to right)]

However, in 2004, Hawking conceded the bet and agreed that black holes do leak information, so no information is destroyed (and no rules are violated). Preskill won, and as promised, the winner was to receive an encyclopedia of his choice, from which he could "retrieve information at will" (great joke). And yes, Hawking did indeed gift Preskill a baseball encyclopedia.

Overall, it was proposed that Hawking radiation is not entirely thermal but also carries encoded information about the interior of the black hole. Other postulates were suggested too, such as the information moving into another universe altogether as the black hole disappears, or escaping just before it does. However, they all have their own contradictions and flaws. Last year, two papers were published suggesting that after half the black hole’s mass has evaporated, a wormhole is created connecting the inside of the black hole with the wider universe. I won’t get into the details unless you want to spend a rather long time reading them.

However, the black hole information paradox will remain at least partly unsolved until we have a proper theory of quantum gravity, one that describes gravity in terms of quantum mechanics. Such a theory would also contribute to the ultimate goal of a theory of everything.

We are only at the beginning of understanding the mechanics of these calamitous objects we call black holes. Throw quantum mechanics into the mix, and we clearly have a long way to go to understand exactly how information escapes this abyss of no return.

————————————————————————————————–

What makes glass transparent?


Welcome to my first article in the all-new ‘The Beauty of Physics in Our Daily Lives’ series. After receiving many requests and suggestions, I have decided to take it down a notch and explain everyday phenomena in a way that is simple and clear for anyone to understand. Of course, I will continue to write about mind-boggling phenomena in space, time, and quantum physics. But this series is for those of you who enjoy the simplicity and beauty of the small things you see around you. Hope you enjoy it!

                                               ——————————

Glass. We see it all around us, and it plays a significant role in everyday objects. Buildings have glass windows, spectacles have glass lenses, and even the device you are using to read this article has a glass screen. But have you ever wondered why glass is the way it is? What makes glass transparent, while other materials are not? Let us find out.

Let’s start with the origin and how glass is made. Glass is made from quartz, a chemical compound of one part silicon and two parts oxygen (SiO2). Quartz can be extracted from the earth’s crust and is most commonly found in sand. At room temperature, quartz is a hard crystalline solid. The silicon and oxygen atoms are connected by covalent bonds (bonds between non-metal atoms) in a gigantic tetrahedral network, which is why it is called a macromolecule. When light hits a quartz crystal, it is scattered in different directions.

To prepare glass, quartz is melted at extremely high temperatures, near 1700°C. The heat supplied during melting breaks the bonds holding the atoms in their regular crystalline structure, freeing them to move around at close range in a liquid state (much like ice melting into water). The melted quartz is then cooled to form glass.

But here is the funny thing about molten quartz: on cooling, it does not return to its crystalline solid form, the way water returns to ice when cooled. Instead, quartz cools into a solid with the disordered structure of a liquid; it never regains its original crystal lattice. This is what we call an amorphous solid, and this property is also what makes glass look so uniform.

[Image: Difference between quartz and glass structures]

All right, so now that you know about how glass is made and its amorphous properties, let’s look at the real reason why glass is transparent.

In an atom of any element, the subatomic particles take up only a tiny fraction of the space, leaving the rest of the atom empty. So you might think that, since atoms are 99.999% empty space, light should simply pass through those gaps, and all objects around you would be transparent, right? So why are some objects still opaque or merely translucent?

The real answer lies not in the empty space but in the different energy levels of electrons in atoms. Let me break this down for you. I’m sure you have seen diagrams of atoms in school textbooks: the nucleus at the center, with orbits carrying electrons zipping around it. These orbits are also called ‘energy levels’. Electrons in orbits furthest from the nucleus have more energy, and electrons in the orbit closest to the nucleus have less.

[Image: Energy-level diagram of an atom]

Electrons also have the ability to jump from their own energy level to others. This ‘jump’ is known as the excitation of an electron from its ground state (its original energy level) to a higher energy state. An electron can make this jump only when supplied with the required energy, and the amount required is known as the ‘energy gap’. The larger the energy gap, the more energy is needed to ‘excite’ the electron, and vice versa.

Now, light is made of particles called photons, which carry energy. When light shines on an object, there are two possibilities: either a photon gives its energy to an electron to make it jump, or it passes through with no excitation at all. Which one happens depends entirely on the energy gap. Energy can only be given to electrons in discrete packets, so a photon either has enough energy to be absorbed or it gives no energy at all. An electron cannot exist between energy levels; it is either in its ground state or its excited state, with no in-between.

In non-transparent objects, when light strikes the atoms, the photons can give their energy to the atoms’ electrons and make them jump to higher energy levels. This is possible because the energy gap in opaque objects is small, so the photons have enough energy to supply the jump.

But if a photon gives away its energy to these electrons, it is absorbed into the material. The light cannot pass through, making the material opaque. In short, opaque materials take the first of the two choices.

Transparent materials like glass, on the other hand, have a large energy gap. The electrons need far more energy to jump to a higher level than the photons of visible light can provide. The photons carry energy, just not enough to lift the electrons to their excited state. Remember, an electron cannot sit between its ground and excited states on a partial dose of energy, so it simply takes none of the photon’s energy and lets it pass through the atom. The photon is not absorbed, and the light passes through!
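
To put rough numbers on this, here is a small sketch comparing visible-photon energies against an assumed band gap of about 9 eV for silica glass (a commonly quoted textbook figure; treat it as approximate):

```python
# Visible photons carry roughly 1240 / wavelength(nm) electron-volts
# (since hc ≈ 1240 eV·nm). Fused silica's band gap is taken as ~9 eV,
# an assumed approximate value for illustration.
GLASS_GAP_EV = 9.0

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nanometers."""
    return 1240 / wavelength_nm

for nm in (400, 550, 700):  # violet, green, red
    e = photon_energy_ev(nm)
    verdict = "absorbed" if e >= GLASS_GAP_EV else "passes through"
    print(f"{nm} nm photon: {e:.2f} eV -> {verdict}")
```

Every visible photon comes in well under the gap (about 1.8 to 3.1 eV versus roughly 9 eV), which is exactly why all of visible light sails through glass.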

So there you have it, the true reason as to why glass is transparent, and other objects are not!

With this, I shall conclude my first article in ‘The Beauty of Physics in Our Daily Lives’ series. If you enjoyed this article, feel free to let me know.

I am also going to be giving you, yes you, the random person reading my article, an opportunity to ask any question about the physics or chemistry of your daily life! Something like "What makes an aeroplane fly? Why do birds not get electrocuted sitting on telephone wires?" and many more 😉

So come on, go outside and question everything you see around you! Just like Albert Einstein, one of the greatest minds to ever walk the earth, said, “When you stop learning, you start dying.”

—————————————————————————————————-

The REAL physics behind time-travel in Avengers: Endgame


In the catastrophic events of Avengers: Infinity War, Thanos successfully eliminated half of all life in the universe and then destroyed the six Infinity Stones. In the follow-up film, Avengers: Endgame, the remaining heroes assemble once more to avenge the fallen. They travel back in time to acquire the six stones before Thanos does, bring them to the present, and use them to resurrect everyone who disappeared.

[Image: Cinematic poster from Avengers: Endgame (courtesy Strange Harbors)]

Although it is just a movie, I thought it would be interesting to analyze the scientific concepts running through the film. Was there some truth to the Avengers’ time travel, or were they just spewing scientific gobbledygook for the sake of entertainment? 

You cannot travel to the past, only to the future

So far, the only two experimentally supported ways to travel through time are to travel near the speed of light or to stand in an extremely strong gravitational field.

The former, known as velocity time dilation, comes from Einstein’s theory of special relativity. The idea is that the faster you move through space, the slower you move through time. Travel fast enough, at about 99.5% of the speed of light (c ≈ 3×10^8 m/s), and one year aboard your ship would correspond to roughly ten years on Earth.

The latter, known as gravitational time dilation, comes from Einstein’s theory of general relativity, which says that gravity bends time (and space): as the strength of gravity increases, time slows down. So if you were to stand near a black hole, time would pass significantly more slowly for you than for someone on Earth.

Either way, you can only slow down time for yourself while it passes normally everywhere else. Technically, you are still traveling into the future, just more slowly than everyone else. Time travel into the future is therefore possible, but never into the past, because the fundamental characteristic of time is that it flows in only one direction: forward. So the Avengers really could never travel back in time. Sorry, team!
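
The "slower through time" effect can be made concrete with the Lorentz factor from special relativity; a minimal sketch:

```python
# Lorentz factor γ = 1 / sqrt(1 - v²/c²): how many Earth years pass
# for each year on the traveler's clock.
import math

def lorentz_factor(beta):
    """beta = v / c, the speed as a fraction of light speed (< 1)."""
    return 1 / math.sqrt(1 - beta**2)

for beta in (0.5, 0.9, 0.995):
    print(f"v = {beta:.3f}c  ->  gamma = {lorentz_factor(beta):.2f}")
```

At half the speed of light the effect is modest (γ ≈ 1.15), but at 99.5% of c the factor reaches about 10, the one-year-aboard-for-ten-on-Earth scenario described above.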

Changing the past does not change the future

Even if the Avengers did somehow travel to the past, nothing they do in the past timeline would change their present one. Smart Hulk explains this to Rhodey in the movie too.

[Image: Smart Hulk (Mark Ruffalo) in Avengers: Endgame (courtesy Indian Express)]

The famous ‘grandfather paradox’ highlights exactly this problem with time travel.

Suppose you went back in time 70 years and killed your grandfather before he married your grandmother (a terrible thought, I know). The problem is that you would then never have been born. And if you were never born, how could you go back and kill him? You would never have existed in this timeline, so you simply can’t.

In Avengers: Endgame, Tony Stark travels back to the 1970s to steal the Space Stone (the Tesseract). But if he stole it, how could his father, Howard Stark, have studied it to develop the Arc Reactor technology that ends up keeping Tony alive in the future? This is one of the clearest instances of the grandfather paradox in the movie. Not to mention that if Tony stole the Tesseract, the Battle of New York in the first Avengers movie would never have happened either.

[Image: Tony Stark (Robert Downey Jr.) meets his father in the past during his time travel (courtesy Republic World)]

Quantum Mechanics and the ‘Many Worlds’ theory turn the tables

Now here is the game-changer. Since the Avengers conduct their ‘time heist’ within the framework of the quantum realm, they can completely sidestep the grandfather paradox. Here is how.

Quantum mechanics is the study of the behavior of subatomic particles. One of its phenomena is superposition, in which a subatomic particle such as an electron can exist in two different places at the same time. We describe such electrons with probabilities, which tell us how likely we are to find the electron at a particular place at a particular time.

Now, David Deutsch, a British physicist whose name Tony Stark happens to mention in one of his lines, combined this probabilistic behavior of subatomic particles with the ‘many worlds’ interpretation. He argued that the grandfather paradox vanishes if time travel is expressed probabilistically (he even ran a simulation of the idea, and it worked). Looked at through the eyes of quantum mechanics, there is a 50% chance that you did kill your grandfather and a 50% chance that you didn’t, so we can technically reach a compromise (it is confusing, I know).

This may be a lot to digest, but hear me out. Think of it like this: if a particle can exist in two positions simultaneously, then many different pasts and futures, each with its own probability, can also exist simultaneously. Bring in the ‘many worlds’ interpretation, and we can say that every possible past and future exists in an alternate parallel universe or timeline.

Now let’s connect this to Endgame. The Ancient One tells Bruce Banner the very same thing: if you change the past, you create an alternate timeline branching off the original, and that parallel timeline leaves the original unaffected. So, either way, nothing they do to change the past affects their present timeline. This logic rules out going back in time to kill Thanos as a baby, since even if they succeeded, it would change not their future but the future of an alternate timeline.

[Image: The Ancient One (Tilda Swinton) explains branching timelines to Bruce Banner (Mark Ruffalo) (courtesy Quora)]

The parallel-timeline idea did, however, work in the Avengers’ favor in a few scenarios. At the end of Endgame, the good Nebula from the future (2023) kills the bad, old Nebula from the past (2014). Yet when the 2014 Nebula dies, the 2023 Nebula seems unaffected. This is because when the 2014 Nebula traveled to the future, she created a split timeline. Since the two Nebulas belong to different timelines that are independent of each other, what happens to one cannot affect the other.

[Image: Nebula (Karen Gillan) in Avengers: Endgame (courtesy NBC News)]

For me, one of the most mind-boggling paradoxes arises when Steve Rogers (Captain America) returns the stones to the past at the end of the movie. Instead of coming back, Steve decides to stay with his love, Peggy Carter. The thing is, here he does not create an alternate timeline; he simply waits out his own. That means he was always married to Peggy. So long as Steve did not interfere with anything in the past that led the Avengers to their final fight against Thanos, his timeline remained stable. Meanwhile, the younger version of Steve would still have woken from the ice decades after the Second World War, which technically means two Steves were alive at the same time. Really weird, huh?

[Image: Steve Rogers (Chris Evans) stays in the past for his long-awaited happy ending with Peggy Carter (Hayley Atwell) (courtesy Screen Geek)]

Since I am not an expert in this subject, I won’t go further into the details. Most of these concepts are theoretical, and I sometimes find them confusing myself, so you aren’t alone there. These are all the points and paradoxes I could think of; there may be some things I missed, so feel free to let me know if you think of another.

At the end of the day, the Russo Brothers did put together a wonderfully crafted cinematic experience, using ideas of science to guide them in making a logically coherent movie too. The action-packed blockbuster movie won the hearts of many of its fans. But it did also get a few to speculate beyond. To look further. To look deeper. Who knows, for maybe in a few centuries, a time machine could be invented. And maybe, just maybe, it could all have been inspired by the creative ideas from Avengers: Endgame ;).

————————————————————————————————–

P Versus NP: The Most Notorious Unsolved Problem in Computer Science

P versus NP is considered by many mathematicians and computer scientists to be one of the most important unsolved problems.

But before diving into the problem itself, you should know its context and history. Although traces of the problem had been discussed informally over the years, Stephen Cook, an American-Canadian computer scientist and mathematician, formally defined it in his seminal 1971 paper.

To make things more exciting, P versus NP is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, and whoever provides the first correct solution to any of them will receive a cash prize of $1,000,000. But here’s the catch: these problems are extremely tough, near impossible, to solve, and that’s stating it lightly. Only one, the Poincaré conjecture, has been solved so far; the rest have remained unsolved in the two decades since their official announcement.

Now, let us take a look at what the P versus NP problem is all about.

P versus NP is an infamous unsolved problem in mathematics and computer science. In a nutshell: can every problem whose answer can be verified quickly by a computer also be solved quickly by a computer?

‘P’ refers to problems that are fast (hence easy) for a computer to solve. ‘NP’ refers to problems whose answers are fast (hence easy) for a computer to check, but not necessarily easy to solve in the first place. Keep this in mind with a shortcut: ‘P’ = fast to solve, ‘NP’ = fast to check.

[Image: Visual depiction of the P versus NP problem (courtesy Medium)]

Take factoring, for instance. Given the prime factors of a large number, it is easy to check by multiplication that they are correct, but finding those factors in the first place, given only the number, is hard. (Merely checking whether a number is prime, perhaps surprisingly, turns out to be fast.)

So why should we care about the P versus NP problem? It shows up in the real world too. Imagine you are the head of security of a large national museum. The museum has over 40 rooms and holds many extremely rare and important artifacts. To make sure no thieves steal them, you are ordered to install cameras so that every artifact can be seen by at least one camera, using as few cameras as possible.

Now suppose I tell you that a particular placement of 150 cameras does the job. Since I have handed you the answer, it is easy for you to check that those 150 cameras do, in fact, cover every artifact in the museum. But if you did not know the answer and had to work it out from scratch yourself, it would take a very long time. This is one of the simplest real-world illustrations of the P versus NP problem.
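
The check-versus-solve asymmetry can be sketched with subset-sum, a classic NP problem (my example, not one from the text above): verifying a proposed answer is a one-liner, while the naive solver has to try every subset, which takes exponential time as the list grows.

```python
# Subset-sum: does some subset of `numbers` add up to `target`?
from itertools import combinations

def verify(numbers, subset, target):
    """Fast ('NP' side): check a proposed answer in linear time."""
    return all(x in numbers for x in subset) and sum(subset) == target

def solve(numbers, target):
    """Slow: brute-force search over all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None  # no subset works

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))            # -> [4, 5]
print(verify(nums, [4, 5], 9))   # -> True
```

With 6 numbers the brute-force search is instant, but with 100 numbers it would have to consider up to 2^100 subsets; whether every such problem secretly has a fast solver is exactly the P versus NP question.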

But here is the best part. The hardest problems in NP, the so-called NP-complete problems, are all connected: a fast method for solving even one of them could be adapted to solve every problem in NP quickly. That would be a definite game-changer in computer science and mathematics.

Now that you understand its significance, let us see where the science and mathematics community stands. The real question is: are NP problems genuinely harder than P problems, or are they secretly the same? If it is the latter (P = NP), then easier and faster solving methods exist for problems that currently stump computers, and we simply have not discovered them yet. If it is the former (P ≠ NP), then no matter how hard we try, some NP problems will forever remain out of reach.

So far, no easy or fast solving method for these hard NP problems has been found, which leads most mathematicians and computer scientists to suspect that NP really is different from P. However, there is no mathematical proof either way. Thus the search for solid evidence continues, and P versus NP remains one of the most mind-boggling and difficult problems to comprehend, let alone solve!

                                             ————————————-

Progress in Superconductor Study Shatters Past Records

Earlier this month, a breakthrough in room-temperature superconductors shook the field of condensed-matter physics. Prof. Ranga Dias and his colleagues from the University of Rochester and the University of Nevada, Las Vegas observed superconductivity in a hydrogen-dominant material at 15°C, under extremely high pressure. The superconducting material was a carbonaceous sulfur hydride (a compound of carbon, sulfur, and hydrogen).

The lab where the superconductor was tested at Rochester || Image Courtesy

Electrical resistance in ordinary wires occurs when free-flowing electrons collide with the atoms that make up the metal. At low temperatures, however, vibrations of the metal's atomic lattice can draw electrons together into pairs known as Cooper pairs. These Cooper pairs move as a cohesive stream that no longer scatters off the lattice, and so they experience no resistance. This is what makes a superconductor: a material that carries electrical current with zero resistance. If the temperature rises, the electrons gain energy and break free of their Cooper pairs, ruining the superconductivity. Many devices rely on superconductors, such as MRI scanners, particle accelerators, and quantum computers, to name a few.

But the major problem is that conventional superconductors only operate at extremely low temperatures, often within a few degrees of absolute zero (0 K, or -273°C). Liquid helium is required to maintain these temperatures, making the process expensive, and helium is a limited natural resource too. So condensed-matter physicists have long sought superconductors that function at room temperature, which would be sustainable and cost-effective.

That is what makes this discovery so important. Not to mention that the operating temperature of this new superconductive material, 15°C, is roughly the average surface temperature of the Earth. The synthesis of the material was extremely technical too. Here is a summary of how Dias' team achieved it, along with the criteria for creating superconductors.

The most basic recipe for making such a superconductor is to use light elements with strong bonds between them, which strengthens the Cooper pairs. Hydrogen is the lightest element, and the bonds between hydrogen atoms are strong, making it the perfect candidate. Metallic hydrogen is therefore predicted to be a superconductor at room temperature, partially solving the temperature problem, but producing it requires extreme pressure: only when squeezed past a certain threshold does hydrogen's structure change enough to turn metallic.

However, the whole aim is to make superconductors more practical, which means the pressure must come down too. So it was suggested that a "hydride" (a compound of hydrogen and another element) be used instead of pure hydrogen, delivering the same superconductive properties at lower pressures.

Many physicists have worked on new combinations of elements for superconductive materials. Much progress was made, but Dias' work has shattered all previous records. His team tested many permutations of elements, searching for the perfect ratio of hydrogen in their material. This part was crucial: too little hydrogen and the compound would not superconduct well; too much and it would behave like metallic hydrogen, bringing back the problem of extreme pressure.

Finally, they hit the bull's eye, obtaining the right compound through a photochemical reaction between methane (a carbon source, the main component of natural gas) and hydrogen sulfide. They then placed the compound in a diamond anvil cell, a high-pressure device that squeezes a tiny piece of material between two diamonds, to test whether it superconducts. The compound passed every test for superconductivity.

Image of a Diamond Anvil || Image Courtesy

While the temperature is right, the pressure is still far too high for practical applications. Although Dias' team know the chemical composition of their compound, they are not sure of its actual structure: the samples are extremely small (on the order of picoliters, 10⁻¹² liters), and the light elements involved make the structure hard to determine. New techniques are being developed as we speak. Once the structure is known, the compound can be chemically fine-tuned, perhaps by changing the ratio of the atoms or swapping an element, to further reduce the pressure at which it superconducts.

Heads are turning with the introduction of carbon into the mix, and physicists feel more confident that, with all this research and focus, superconductors can reach real-world applications soon. And when they do, they will completely revolutionize the way we perceive the electrical world.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Writer’s Note:

Thank you all for reading my article; I truly appreciate it! Feel free to subscribe to my blog so you'll get a notification in your inbox whenever I post a new article. Like and share 🙂

Have a lovely day and keep learning!

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Black Hole Breakthrough Bags This Year’s Nobel Prize

With a winning streak of two consecutive Nobel Prizes in Physics, the field of astronomy has set the bar high yet again. Last year, the award went to the discovery of an exoplanet orbiting a solar-type star. This year, it was given for "the discovery of a supermassive compact object at the center of our galaxy". The "compact object" being referred to is a black hole.

Half of the Nobel Prize went to Roger Penrose, an English mathematical physicist. Reinhard Genzel, a German astrophysicist and co-director of the Max Planck Institute for Extraterrestrial Physics, shared the other half with Andrea Ghez, a professor in the Department of Physics and Astronomy at UCLA.

So, what was so amazing about 2020's Nobel Prize in Physics? Here is everything you need to know about the groundbreaking research done by these physicists and their teams, which finally settled the question of whether supermassive black holes sit at the centers of our galaxy and others like it.

Winners of the Nobel Prize in Physics 2020 – Prof. Roger Penrose, Prof. Reinhard Genzel and Prof. Andrea Ghez || Image Courtesy https://physicsworld.com/a/roger-penrose-reinhard-genzel-and-andrea-ghez-bag-the-nobel-prize-for-physics/

Roger Penrose, together with Stephen Hawking, developed the theory predicting the existence of black holes. The duo used Einstein's theory of general relativity to prove mathematically that such objects can form. However, black holes were not taken seriously back in the '60s and '70s, as they were purely theoretical. There had to be evidence, something to back up the postulate, for it to hold weight in the science community.

As the years passed, technological advances made astronomical data collection far more feasible. While Penrose had secured the theoretical side, Ghez and Genzel took care of the observational part, hoping to prove that there really is a supermassive black hole at the center of our Milky Way. Both physicists used infrared telescopes: infrared light's longer wavelength lets it pass through the dust that is heavily concentrated at the galactic center. With these telescopes they tracked the positions and velocities of stars near the center, from which one can determine the stars' orbits, and consequently the mass of the object they orbit.

From 1992 to 1996, Prof. Reinhard Genzel and Prof. Andreas Eckart used the NTT (New Technology Telescope) in Chile to observe the positions and motions of ten stars. From the data collected over four years of research, they estimated the mass of the central object to be 2.45 (±0.4) million times the mass of the sun. They were fairly sure it was a black hole.

But science is not very forgiving, and one piece of evidence is not enough to prove such an abstract theory. So Prof. Andrea Ghez and her team at UCLA did the same, but with greater precision, using the high-resolution Keck Telescope in Hawaii. The Keck uses lasers to gauge the turbulence in the atmosphere, which distorts the light from the stars being observed; that distortion is then subtracted from the observations, yielding far more accurate data. Prof. Ghez and her team observed over 500 stars for a long 12 years, from 1995 to 2007. Finally, they calculated the mass of the object at the center of these stars' orbits to be 4.5 (±0.4) million times the mass of the sun.
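The star-orbit method boils down to Kepler's third law: in units of AU, years, and solar masses, the central mass is M = a³/T². Plugging in round, illustrative values for the famous star S2 (roughly a 1,000 AU semi-major axis and a 16-year period; the exact published figures differ slightly) already lands in the millions of solar masses:

```python
# Kepler's third law in solar-system units: M (solar masses) = a^3 / T^2,
# with the semi-major axis a in AU and the orbital period T in years.
# Rough values for the star S2 orbiting the galactic center
# (approximate, for illustration only).
a_au = 1000      # semi-major axis in AU
t_years = 16     # orbital period in years

mass_solar = a_au ** 3 / t_years ** 2
print(f"Central mass ≈ {mass_solar / 1e6:.1f} million solar masses")
# prints: Central mass ≈ 3.9 million solar masses
```

The same one-line law, applied to many stars at once with precisely measured orbits, is what pins down the mass so tightly.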

Take a second and let that number sink in. It's hard to even imagine, isn't it? It had to be a black hole, no doubt: nothing else known to physics can pack millions of solar masses into such a small region of space. The numbers obtained by both Ghez and Genzel were utterly shocking, almost too crazy to be true.

The discovered object was indeed a supermassive black hole, named Sagittarius A*. The first-ever image of a black hole (the one at the center of the galaxy M87), released in 2019, only delivered the final blow confirming the existence of these mind-bending bodies.

The sheer determination and willpower of these scientists is truly inspiring, and this year will send their names down in history. They dedicated their whole lives to the pursuit of an answer to this one question, and it is safe to say that all their blood, sweat, and tears paid off. I think the most important thing we should take from this year's Nobel Prize winners is to never give up on something you believe in. They are living proof that no matter who you are, if you set your mind to something, with hard work and perseverance you can achieve anything. And nothing can stop that.

Is ‘Spiderman: Into The SpiderVerse’ Accurate About a Multiverse?

What is reality? We believe that the universe we exist in now, including the galaxies, stars, planets, is our reality and there isn’t more to it. There is only one reality, and that is the one that we all live in. That’s the preconceived assumption, right?

The 2018 Academy Award-winning movie 'Spider-Man: Into the Spider-Verse' explores a scenario with parallel realities, in which Spider-Man meets alternate versions of himself after an accident at a particle collider. The whole concept may sound like pure fantasy, just a twist added by the filmmakers to make the movie interesting and conceptual.

Poster for the movie 'Spider-Man: Into the Spider-Verse' || Image Courtesy https://nofilmschool.com/Into-The-Spiderverse-Screenplay

But, what if I told you that there actually are multiple realities, that exist simultaneously? That they had universes similar to ours, and environments similar to the ones we are familiar with?

What if I told you that there was truly another version of you living in an alternate reality? For all we know, that other 'you' might be reading this article right now.

I'm sure most of you are puzzling over the previous sentence. How is that even possible? How can there be another 'me' when we are all supposed to be 'unique'? It sounds plausible enough in a fictional movie, but how can anything like that happen in real life? Was Spider-Verse hinting at a possible multiverse affecting our everyday lives without our knowing it?

If we want to apply this to the world we live in now, we are going to have to get into the science behind the multiverse, with a pinch of philosophy too. If you are up for the challenge, and if you are curious to know more, go on and read ahead.

Let's start off with the basics. Physics is divided into classical physics and quantum physics. Classical physics is the branch that explains most of the events we see in everyday life, like the trajectory of a ball or the rotation of planets. These things are easy to comprehend, as we are used to the classical side of the world. Quantum physics, on the other hand, deals with far more bizarre phenomena. Like really, really bizarre.

Both classical and quantum systems evolve in a deterministic manner: given the initial conditions, you can predict how a system will behave by applying the respective equations. The evolution of a classical system is simply more graceful; it follows its equations plainly, with no surprises when you look at it.

A quantum system, like an electron, has a wave-function. The wave-function is quite the joker: as long as you are not looking, it obeys the Schrödinger wave equation, which lets us predict how it will evolve as time passes. The catch is that a whole different set of rules applies when you look. When you observe a quantum system, the wave-function collapses to a single point in space. How and why this happens is known as the measurement problem. This will make more sense when I explain superposition, but keep it in mind anyway.

Visual Representation of the Wave-Function Collapsing || Image Courtesy http://afriedman.org/AndysWebPage/BSJ/CopenhagenManyWorlds.html

The other two phenomena that make up an important part of the multiverse theory, are superposition and entanglement. Superposition, as you already know, occurs when a quantum system exists in two states at the same time. Simple.

Entanglement occurs when two particles interact physically, so that both come to be described by a single shared wave-function. This essentially means that an action performed on one particle affects the other: measuring one of them affects the state of the other, and that single wave-function describing both particles collapses.

Alright, now you pretty much know the basics, so it's time to put the pieces of the puzzle together. To do that, we will use an infamous thought experiment known as Schrödinger's cat to understand how there can be multiple realities on a small scale. Schrödinger's cat was a thought experiment proposed by the physicist Erwin Schrödinger to show just how weird quantum mechanics is; it has since become the standard illustration for the Many-Worlds Interpretation.

The scenario is this: a closed box contains a radioactive atom, a radiation detector, a bottle of poison, and, amidst all this, a cat. If the radioactive atom decays, the detector registers the radiation and triggers the release of the poison, and the cat dies (a morbid thought, I agree). If the atom does not decay, the detector registers nothing, the poison stays sealed, and the cat stays alive. We don't know whether the atom will decay or not, so we say the atom is in a superposition of decayed and not decayed. The state of the detector (radiation detected or not) and the state of the cat (dead or alive) are directly tied to the state of the atom: the fate of the cat depends on the detector, which in turn depends on the atom. Since the atom is in a superposition, the detector and the cat end up in superpositions as well. The whole system inside the box is thus in a quantum superposition, described by one wave-function. Only when you open the box, making a measurement, does the wave-function collapse (no more superposition), leaving the cat either dead or alive.
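The perfect correlation between atom, detector, and cat can be caricatured in a few lines of ordinary, classical Python. This is just a Monte Carlo of what you find on opening the box, not a real quantum simulation, and the 50% decay probability is an arbitrary choice:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def open_box(p_decay=0.5):
    """One 'opening of the box': the atom's outcome fixes everything else."""
    decayed = random.random() < p_decay    # atom: decayed or not
    detected = decayed                     # detector state is tied to the atom
    cat = "dead" if detected else "alive"  # cat state is tied to the detector
    return decayed, detected, cat

trials = [open_box() for _ in range(10_000)]

# The three outcomes are perfectly correlated: you never find a live cat
# alongside a decayed atom.
assert all((cat == "dead") == decayed for decayed, _, cat in trials)
print(sum(cat == "dead" for *_, cat in trials) / len(trials))  # ≈ 0.5
```

What no classical program can capture is the claim that, before the box is opened, the whole chain exists in both configurations at once; the code only reproduces the statistics you see after measuring.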

Depiction of the superposition of the cat, the poison, and the radiation detector || Image Courtesy https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat

This way of looking at the situation is known as the ‘Copenhagen Interpretation’, which says when there is a superposition, and you look at it, the wave-function collapses and you only see one of the two possibilities that were in superposition. The End.

But the Many-Worlds Interpretation (MWI) arises from this measurement problem: where did the other possibility go? In response to the Copenhagen Interpretation, MWI makes two points.

Firstly, you, as a human, are made of atoms, and atoms are made of other quantum particles. Therefore you yourself are a quantum system, so you should obey the laws of quantum mechanics, meaning that you, just like the cat, can be in a superposition. Why, then, would you treat yourself as a classical system when making a measurement? Strong point, right?

Secondly, for a moment, forget that you are a conscious person. Just picture yourself as a physical system obeying quantum physics. The moment you open the box and look at the cat, no special "measurement" happens; you simply interact with the system and become entangled with the state of everything inside the box. This means you too end up in a superposition: one version of you sees the cat alive, another sees it dead, and those two copies inhabit two different worlds, each existing in its own reality, never to interact. I will briefly explain how this is possible.

Depiction of the wave-function branching into two parallel universes, in which the cat is dead and alive in different realities || Image Courtesy https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat

If a quantum object in superposition becomes entangled with its environment, the surrounding air molecules and photons, it is said to undergo environmental decoherence, which causes the wave-function to branch. In the case of Schrödinger's cat, the detector becomes entangled with the superposition of the atom, but the detector is also constantly being bombarded by air molecules and photons, which bounce off differently depending on whether radiation was detected. The detector thereby becomes entangled with the state of the environment, causing decoherence and making the wave-function branch. So the moment you open the box, two copies of you come into being, in different worlds.

This means that the wave-function is branching all around us, with every possible outcome happening somewhere, though we never experience it because each of us is just a sliver of a huge multiverse. Keep in mind that this branching only happens where there is environmental decoherence.

Many-Worlds Interpretation Depiction || Image Courtesy https://www.youtube.com/watch?v=kTXTPe3wahc

Now that you understand the science behind the Many-Worlds Interpretation and the conditions for parallel realities to exist, let's look at the 'Spider-Man interpretation' of the multiverse. This is my own analysis of the movie based on my knowledge, so I might not be 100% correct.

So in the movie, after the Miles Morales version of Spider-Man is bitten by the radioactive spider, radioactive decay starts happening inside his cells. The situation is very similar to the radioactive atom in Schrödinger's cat. An average 70 kg human body contains around 7 × 10²⁷ atoms, and roughly 5,000 of its radioactive nuclei decay every second. Each of those nuclei sits in a superposition of decayed and not decayed, so as they decay and become entangled with the environment, the wave-function branches again and again, creating more and more copies of you at a tremendous rate. If this applies to an average human being, it certainly applies to Miles Morales, whose decay rate is presumably accelerated thanks to the radioactive spider.

All the Spider-Men from different dimensions || Image Courtesy https://www.thrillist.com/entertainment/nation/spider-man-into-the-spider-verse-spider-man-characters

Just as the copy of you that saw the cat dead and the copy that saw it alive exist in separate realities that can never interact, the many copies of Miles Morales cannot interact. So even though there are so many versions of Spider-Man from different dimensions and realities (Gwen Stacy, Peter Parker, etc.), it should be impossible for any of them to experience another's reality, since each branch is unaffected by the others.

But then a particle collider comes into play, and the different Spider-Men meet through it. Particle accelerators are enormous machines that use electromagnetic fields to propel charged particles to very high speeds, mainly for research in particle physics. There has been speculation that the famous Large Hadron Collider at CERN could in principle create tiny black holes, though none has ever been observed, and any such microscopic black hole would evaporate almost instantly through Hawking radiation. Some speculative ideas go further and suggest that if tiny black holes could be created, they might act as gateways to other dimensions or parallel realities. I am not an expert on the subject, so I won't get into the details. This is all speculation, and since it borders on philosophy, we cannot be sure. But if there is anything to it, then who knows: maybe the events of Spider-Verse could, just maybe, be possible with a particle accelerator (even though the movie never mentions black holes). There are probably other things in the film that don't quite make sense, but these were the ones I could think of.

Of course, the Many-Worlds Interpretation is still hotly debated in the science community and has not been proven. Some physicists think it is too extravagant, while others are infatuated with the idea. So the whole notion of 'the multiverse' is not proven to exist, but it could be possible. Even if only in some low-probability world, there could be a copy of you being president, or winning Wimbledon, all in different realities. Even amid the uncertainty, it is fun to ponder how our universe might work if there were a multiverse.

————————————————————————————————–

The Future of Computers

Quantum computing is the future of information technology. Mighty and possibly revolutionary, these computers will be a force to be reckoned with a few decades down the line. Google's quantum computer, called 'Sycamore', was able to solve a specific problem in 200 seconds, while a powerful supercomputer was estimated to need a whopping 10,000 years for the same task. Quantum computing holds high promise of becoming the belle of the ball, taking digital computation and problem-solving capacity to a level we never thought possible.

Google’s Quantum Computer

All right, enough with the majestic introduction. You are probably thinking something along the lines of "Why does she just add 'quantum' in front of every word in the dictionary?" or "Does this mean I can finish my maths homework quicker with this computer?". Unfortunately, I don't really have the best answers to those queries. But I can give you an idea of what these computers can do, how they work, and the promise they hold for the future.

The computers we use in our day-to-day lives are classical computers. They function on the binary system, using 'bits'. A bit exists as either a 0 or a 1, and nothing else, so we can always be certain of its state.

Quantum computers, however, use a weirder unit of information: the qubit (quantum bit). Beyond the fact that the word 'quantum' has yet again been stuck in front of an ordinary word, there is something that makes it genuinely different: a qubit can exist as a 0 and as a 1 at the same time. This phenomenon is known as superposition, probably the most important concept in quantum computing. In quantum physics, a particle such as an electron can exist in two different states or places at the same time. The catch is that if a measurement is made on the particle, the wave-function collapses (it returns to a single state or place), and the superposition is gone.

The importance of maintaining superposition and preventing measurements/interactions, to reduce error rates

Because of this fragile nature of superposition, qubits must not interact with any other particle (which is what I really meant by 'measurement' in the previous paragraph). If a qubit is disturbed, then the qubit that was existing as both 0 and 1 simultaneously collapses to either a 0 or a 1, just like a classical bit. Note that we cannot tell in advance which state (0 or 1) it will collapse to, because quantum computing, like quantum physics, is based purely on probabilities. That lack of certainty is what made physicists skeptical of quantum physics in the first place. Yes, the risk of losing the superposition is always there: if the qubit interacts with something, it collapses, making the quantum computer not so 'quantum'. But if there is no disturbance, the system evolves deterministically and maintains its superposition. This ability to remain quantum rather than classical is known as quantum coherence.

——————-Interlude for the Mathematics (can skip if wanted)———————-

Interlude: For the mathematically inclined reader, we write the superposition of 0 and 1 as α|0⟩ + β|1⟩, where α and β are the amplitudes of the two states in superposition (normalized so that |α|² + |β|² = 1). When a qubit is measured, there are two possible results:

  1. The result is '0', with probability |α|²
  2. The result is '1', with probability |β|²

We take the squared magnitudes of alpha and beta because, mathematically, probabilities are obtained from the square of the wave-function (Ψ) amplitude. The two outcomes are equally likely only in the special case |α| = |β|.

Visual Representation of the Mathematics above
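The measurement rule above can be checked empirically with a minimal sketch (real, positive amplitudes assumed for simplicity). Measuring a qubit with α² = 0.36 and β² = 0.64 a hundred thousand times, the observed frequencies converge to the squared amplitudes:

```python
import math
import random

random.seed(0)  # fixed seed so the run is repeatable

# A qubit α|0⟩ + β|1⟩, normalized so that α² + β² = 1.
alpha = math.sqrt(0.36)  # P(measure 0) = α² = 0.36
beta = math.sqrt(0.64)   # P(measure 1) = β² = 0.64

def measure():
    """Collapse: the result is 0 with probability α², else 1."""
    return 0 if random.random() < alpha ** 2 else 1

shots = [measure() for _ in range(100_000)]
print(shots.count(0) / len(shots))  # ≈ 0.36
print(shots.count(1) / len(shots))  # ≈ 0.64
```

Real quantum hardware works the same way from the user's point of view: you prepare the same state many times, measure each copy once, and read the amplitudes off the outcome statistics.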

So now you know how important it is not to make a measurement on a superposition, and how measurement affects the qubit. Let's discuss how we ensure there is no disturbance inside a quantum computer. So far, the best-developed method is to use superconductors.

A superconductor is basically a special type of material through which charge can move without resistance, so it does not lose any energy. In ordinary electricity cables, for example, some electrical energy is always lost to the surroundings as heat, because of resistance. With no resistance there is no energy lost, hence 100% efficiency. But what does this mean for qubits? Well, qubits are made of superconductive material such as aluminium, which ensures there is no resistance. When charge moves without resistance, it does not interact with anything in its surroundings, so no 'measurement' is made! Are you making the connections?
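The "no energy lost" claim is just Joule's law, P = I²R, with R = 0. Here is a two-line illustration with made-up round numbers (the current and resistance values are arbitrary):

```python
# Power dissipated as heat in a cable: P = I^2 * R.
# With zero resistance, the loss is exactly zero.
current = 100.0          # amperes (illustrative value)
r_copper = 0.5           # ohms, an ordinary cable (illustrative value)
r_superconductor = 0.0   # ohms, below the critical temperature

loss_copper = current ** 2 * r_copper          # 5000 W wasted as heat
loss_super = current ** 2 * r_superconductor   # 0 W
print(loss_copper, loss_super)  # prints: 5000.0 0.0
```

For a qubit the point is not efficiency but silence: dissipated heat is exactly the kind of interaction with the surroundings that would count as a measurement and destroy the superposition.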

Here is where things get really sciencey and more detailed (you have been warned)

These qubits are solid-state devices made of superconductors (I hope you followed that). The superconductor prevents the electrons from (a) interacting with each other and (b) interacting with other "degrees of freedom", which are basically other excitations in the material, such as phonons. It does this by condensing the electrons into Cooper pairs: pairs of loosely bound electrons with opposite spins that move in opposite directions at the same speed. These Cooper pairs then condense into something called an electron superfluid, and superfluids move without resistance or interactions.

So one mission is accomplished: the electrons are kept from interacting with each other. Secondly, as there is not enough energy available to break a Cooper pair and free its electrons, there is no interaction between the electrons and the other degrees of freedom either. Second mission accomplished as well! We can drive the qubit using the electron superfluid without breaking the Cooper pairs and without jeopardising quantum coherence.

The quantum computer is kept at temperatures near absolute zero (0 kelvin, or -273℃), colder even than the vacuum of space, using a refrigerator based on a mixture of two helium isotopes. There are many more micro-level measures to eliminate other sources of error, but I won't go into the details.

There is a lot more going on in quantum computers than what I have explained. The wave-function of a superconducting qubit has two components: the amplitude (related to the number of Cooper pairs) and the phase (related to something called the supercurrent). These are conjugate variables, related by Heisenberg's Uncertainty Principle, which, as I have explained in previous blogs, means the two cannot be measured at the same time or (yes, again) the wave-function collapses. So two further kinds of superconducting qubits have been built: the charge qubit, based on the amplitude, and the flux qubit, based on the phase. If I go beyond this, people are going to start fainting, including me.

Quantum computing mainly comes into play for large, complex problems that regular computers, even supercomputers, lack the power to solve. Cryptography is one of the most discussed applications, and physicists and engineers are trying to find ways to bring these machines to the masses. Currently only IBM, Google, D-Wave Systems, Toshiba, and a handful of other companies have working quantum computers. There is still a long way to go, but the first step is often the most important leap toward innovation in the 21st century.

IBM’s quantum computer

So there you have it! A brief idea of quantum computing! This is just the tip of the iceberg – there are so many more things that play pivotal roles in quantum computing. This is the future of technology. Some say it may even be more handy and powerful than artificial intelligence. Who knows? There is always uncertainty in science, and always scope for improvement. As there is uncertainty in these turbulent times. Seems a lot like quantum physics has predicted the uncertain probability of our future, hasn’t it?

Trust me, this is just the beginning.

The God Particle

In our everyday life, we see and experience many things. For example, you can feel the gadget you are holding now, the table and chairs in your living room, your coffee mug, and so forth. We know, from high school, that every one of those objects is made of atoms, and that the billions of atoms (made of protons, neutrons and electrons) in, let’s say, that table account for the overall mass of the table. I hope I didn’t confuse you already.

But what if I told you that the elementary particles that make up an atom – the electrons, and the quarks inside the protons and neutrons – are, on their own, massless? Zip. Nada. Nil. Zero. Then how can any object possibly have any weight?!
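To get a feel for how little of an atom’s heft comes from the intrinsic masses of its constituents, here is a rough back-of-the-envelope check in Python. The quark masses below are approximate textbook values (in MeV/c², for illustration only): the rest masses of a proton’s three quarks add up to only about 1% of the proton’s mass.

```python
# Back-of-the-envelope: how much of a proton's mass comes from the
# rest masses of its constituent quarks? (Approximate values, MeV/c^2.)
UP_QUARK = 2.2      # approximate up-quark rest mass
DOWN_QUARK = 4.7    # approximate down-quark rest mass
PROTON = 938.3      # proton rest mass

# A proton is two up quarks and one down quark (uud).
quark_total = 2 * UP_QUARK + DOWN_QUARK
fraction = quark_total / PROTON

print(f"Quark rest masses sum to about {quark_total:.1f} MeV/c^2")
print(f"That is only about {fraction:.1%} of the proton's mass")
```

So almost all of a proton’s mass comes from somewhere other than the quarks’ own rest masses – which makes the question of where those intrinsic masses come from in the first place all the sharper.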

Since I may have just obliterated everything you thought you knew about the framework of reality, I really should make it up to you by telling you what is really going on. One may console oneself by saying, “It’s all God, I’m telling you,” but according to science it is the “god particle”, also known as the Higgs Boson, that gives particles their mass. I will be referring to this particle by the latter name, since the former touches on many religious beliefs.

Before we go on to understand what the Higgs Boson is (funny name, I agree), you are going to have to completely discard what you think you know about the universe.

We have all been taught that electrons, protons, neutrons, photons etc. are particles. But this is not the full truth. They are all fields. Difficult to understand, right? Electrons, for example, are simply vibrations in the electron field that fills the universe. Protons, made of quarks, are vibrations in the quark field that fills the universe. If you still can’t picture it, just imagine a still ocean, and suddenly a wave passes over it. The ocean is the field and the wave is the particle of that field. The world is actually made of fields, but our limited observations cannot perceive that, so we just see particles instead. Particles have a location, but fields fill space.

Vibrations of fields, followed by a depiction of the stacks of fields existing simultaneously in the universe (courtesy of PBS Space Time)

Alright, now that that’s settled, I am gonna introduce you to the 4 fundamental forces at work in an atom. There are the long-range forces, gravity and electromagnetism, whose effects can in principle be felt at any distance. The others are the short-range forces, the strong and weak nuclear forces, whose effects are confined to roughly the size of a nucleus. To give you more context about these forces: electromagnetism acts between the electrons and the nucleus (binding the atom together), gravity acts on everything with mass, the strong nuclear force binds the quarks together inside protons and neutrons (the up and down circles you see in the picture are quarks) and also holds the nucleus itself together, and the weak nuclear force is responsible for radioactive decay, for example turning a neutron into a proton.

Diagram of the forces found in an atom and their respective particles (courtesy of Sean Carroll, The Royal Institution)

The first question that pops into your head is probably: why can’t the weak and strong nuclear forces be long-range forces? What is stopping them?

Now, each of these forces has a particle that carries it. Gluons carry the strong nuclear force, electromagnetism is carried by photons, the weak nuclear force is carried by the W and Z bosons (bosons are just particles that carry force or energy), and gravity is thought to be carried by gravitons (this last one is still hypothetical – gravitons have never been detected).

So you now know the carriers of these four fundamental forces. Now let’s get back to the question. The photons and gravitons, carriers of the long-range forces, are massless, so they move at the speed of light and their range is unlimited. The W and Z bosons, the carriers of the weak nuclear force, are very massive, and that mass is exactly what limits the weak force to short range. (The gluons are actually massless too, but the strong force is still short-ranged, because gluons interact with each other and stay confined inside the nucleus.)

You may just think that this is the way things are, nothing much here. But don’t get complacent. Think a bit further. Question it. If I told you all particles are actually massless, I would puzzle you again, right? That is the truth. So what is giving mass to those carriers? Why do some remain massless, and why do others gain mass? Is there some sort of magic happening? Or is it just an act of God?

*suspenseful music plays*

It’s the Higgs Boson

I don’t want to tire you out too soon. So the magic of the Higgs Boson and its field will be explored in my next blog post. Stay tuned for that. That’s where all the real fun is!

Just remember that you and I would be pure vacuum, nothing at all, with no mass, purely meaningless, without the Higgs field.

Without it, life as we know it, and the universe as we see it, would not exist at all.
