
November 13, 2024

Why Are Some Countries Rich While Others Are Poor?

Traditionally, economic models emphasize that economic growth rests on the accumulation of factors of production, namely, labor, capital, and technology that enhance productivity and efficiency. In other words, the greater the capital stock per worker and the more productive and efficient its use, the richer a country would be. The obvious question that it raises is: Why did some countries accumulate more of these factors of production and grow richer than others?

This year’s winners of the Sveriges Riksbank Prize in Economic Sciences, awarded in memory of Alfred Nobel, “for studies of how institutions are formed and affect prosperity”—Daron Acemoglu, Professor at MIT, Cambridge, US; Simon Johnson, Professor at MIT, Cambridge, US; and James A Robinson, Professor at the University of Chicago, US—argue that the accumulation of factors of production in a country depends on the quality of its institutions and government.

In 2001, these three Laureates published a paper, “The Colonial Origins of Comparative Development: An Empirical Investigation”, that became one of the most cited papers in economics. This paper examined the long-lasting effects of colonialism on the economic development of countries. Employing a range of empirical methods, including regression analysis, and analysing the colonial experiences of various countries and their current economic performance using historical data, the authors have shown a clear correlation between the type of institutional patterns established during colonialism and present-day economic outcomes. 

According to them, colonizers established two kinds of institutions in the colonies: extractive institutions and inclusive institutions. Extractive institutions were designed to exploit the resources and labor of a colony, such as the Congo, for the benefit of a small elite, i.e., the colonizers. Such institutions helped colonizers retain control and thereby enjoy short-term gains. Inclusive institutions, on the other hand, promoted broad participation, including that of the locals, in the economic process and provided secure property rights, which ultimately fostered investment and innovation. This kind of institutional pattern had a lasting impact on the economic prosperity of colonies such as the United States, Canada, Australia, and New Zealand, leading to flourishing economic growth.

This phenomenon of establishing different institutions in different colonies gave the Laureates a “natural experiment” to analyze and unearth the criteria underlying the creation of different institutions in different colonies. The authors hypothesized that the colonization strategy was in part determined by the feasibility of European settlement in the colonies. In support of this argument, they cleverly used historical data on the mortality rates of settlers in different colonies to infer that colonies with low mortality rates became attractive for colonizers to settle in for the long term, and as a result, the colonizers built inclusive institutions allowing the colonized to share in the wealth produced through private property and free markets.

By contrast, in colonies such as those in Africa and South America, where mortality rates among Europeans were high, colonizers tended to develop extractive institutions, for they had less incentive to settle there for long and build lasting governance structures. The authors concluded that these institutions, by virtue of their persistence to the present, continue to shape economic performance. Their results also indicate that “reducing expropriation risk would result in significant gains in income per capita but do not point out what concrete steps would lead to an improvement in these institutions”.
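To make that identification strategy concrete, here is a minimal two-stage least squares sketch of the instrumental-variables logic described above (settler mortality as an instrument for institutional quality); the data and variable names are synthetic and purely illustrative, not the authors' actual dataset or code.

```python
import numpy as np

# Minimal 2SLS sketch of the identification idea:
# settler mortality (instrument) -> institutional quality -> income per capita.
# The numbers below are synthetic, for illustration only.
rng = np.random.default_rng(0)
n = 64
log_mortality = rng.normal(4.5, 1.0, n)                             # instrument Z
institutions = 8.0 - 0.9 * log_mortality + rng.normal(0, 0.5, n)    # endogenous X
log_gdp_pc = 2.0 + 0.5 * institutions + rng.normal(0, 0.3, n)       # outcome Y

def ols(y, x):
    """Ordinary least squares with an intercept; returns (intercept, slope)."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project institutional quality on the instrument.
a0, a1 = ols(institutions, log_mortality)
fitted_institutions = a0 + a1 * log_mortality

# Stage 2: regress income on the fitted (exogenous) part of institutions.
b0, b1 = ols(log_gdp_pc, fitted_institutions)
print(f"2SLS effect of institutions on log income per capita: {b1:.2f}")
```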

Some economists, however, argue that the data for the key variable, “settler mortality”, on which their very argument rests, is flawed and selectively chosen to flatter their hypothesis, and hence that their empirical findings are not robust. They also comment that the authors’ way of inferring causation is debatable, for it cannot distinguish between the effect of the places the colonizers went to and that of the human capital they brought along with them.

Nevertheless, this paper has greatly influenced several disciplines, such as development economics, political science, economic history, and institutional studies. It also highlights the importance of historical specificity in understanding economic growth, and in that sense, it moved development economics away from traditional growth models.

Interestingly, way back in 1997, Jared Diamond argued in his trans-disciplinary non-fiction book Guns, Germs, and Steel: The Fates of Human Societies that it was the geography of European countries that led to their early economic growth. Moving a bit beyond Diamond’s assertion, this year’s Nobel Laureates in economics argued that it was the kind of institutions built by the colonizers in the colonies—shaped by mortality rates linked to each region’s geography—that ultimately influenced economic growth.

Indeed, two of the Laureates, Acemoglu and Robinson, taking the chain of causation one step further, argued in their best-selling book, Why Nations Fail, that political institutions, as determinants of economic institutions, drive long-run development. This argument, however, fails to explain how China, with its kind of political institutions, could grow economically so well, while India, despite its democratic setup, could not grow as well.

Nonetheless, the Nobel Laureates’ exploration of the question of why some countries are rich and others poor has certainly laid a foundation for later generations of economists to build fresh knowledge on. That aside, their paper also suggests strengthening property rights, encouraging political inclusivity, and promoting good governance to foster economic development in post-colonial countries. Simply put, the Nobel Laureates argue for a genuine commitment by the governments of the erstwhile colonies to build institutions that promote inclusive economic growth.

**

October 27, 2024

Nobel Prize in Physics 2024

On Tuesday, October 8th, the Royal Swedish Academy of Sciences awarded the Nobel Prize in Physics 2024 to Dr John J Hopfield of Princeton University, USA, and Dr Geoffrey E Hinton of the University of Toronto, Canada “for foundational discoveries and inventions that enable machine learning with artificial neural networks”.

Ellen Moons, chair of the Nobel Committee for Physics and a physicist at Karlstad University, lauded the work of the two laureates, which used “fundamental concepts from statistical physics to design artificial neural networks” that can “find patterns in large data sets”.

This announcement, however, stirred a debate across academia. The critics’ argument is: How could research associated with machine learning and artificial intelligence, which falls under computer science/mathematics, be awarded a prize in physics?

Another section, however, argues that Dr Hopfield, having developed a type of artificial network—the Hopfield network—which behaves like a physical structure called a spin glass, appears to have offered the Academy a tenuous reason to reward the research under physics.

But Noah Giansiracusa, an associate professor of mathematics at Bentley University, said: “… Even if there's inspiration from physics, they're not developing a new theory in physics or solving a longstanding problem in physics." Thus, many opine that though their work deserves recognition, the lack of a Nobel Prize for mathematics/computer science has distorted the outcome.

That said, let us move on to learn how these two laureates made computers—which, of course, cannot think—mimic functions such as memory and learning. They created a technology—artificial neural networks (ANNs) and deep learning—that became the intellectual foundation of AI systems.

ANNs are networks of neurons (or processing centres) designed to function as a system similar to the human brain in learning and processing information. The foundations of ANNs rest on various branches of science, such as statistical physics, neurobiology, and cognitive psychology.

ANNs are computer programs designed to act like our brains. They loosely model the way biological networks of nerve cells, called neurons, connected by axons are believed to work. The basic units of the brain, the neurons, have limited intelligence on their own. Each neuron has a number of input wires called dendrites through which it receives inputs from other locations. A neuron also has an output wire called an axon through which it sends signals to other neurons.

In short, a neuron is a computational unit that gets a number of inputs through dendrites, does some computation, and then sends its output via the axon to other neurons. Billions of such densely connected neurons, spread over many layers in the brain, have tremendous processing power and can thus solve complex problems.

Led by this understanding, ANNs are built from many simple machines—nodes or neurons—distributed over several layers: an input layer, an output layer, and hidden layers in between. These nodes are connected through unidirectional links that carry signals. Each node performs a simple computation on its inputs. If the result exceeds a threshold value, the node gets activated, just as a neuron in the brain fires. The activated node transmits a signal to the next node, which may or may not be activated in turn. We thus ultimately get a pattern of 1s and 0s, and based on the patterns of 1s and 0s in the input and output layers, one can train the network to respond in a particular way.
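As a minimal sketch of the threshold rule just described, the toy snippet below passes a pattern of 1s and 0s through two layers of binary threshold units; the weights and thresholds are arbitrary numbers invented for illustration, not taken from any real trained network.

```python
import numpy as np

def threshold_layer(inputs, weights, thresholds):
    """Each node sums its weighted inputs and 'fires' (outputs 1) only if
    the sum exceeds its threshold -- the simple activation rule described above."""
    return (weights @ inputs > thresholds).astype(int)

x = np.array([1, 0, 1])                      # pattern of 1s and 0s at the input layer
W_hidden = np.array([[0.6, -0.4, 0.9],
                     [0.2,  0.8, -0.5]])     # arbitrary illustrative weights
W_output = np.array([[1.0, -1.0]])

h = threshold_layer(x, W_hidden, thresholds=np.array([0.5, 0.5]))
y = threshold_layer(h, W_output, thresholds=np.array([0.0]))
print("hidden layer:", h, "output layer:", y)
```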

In 1982, Dr Hopfield, a physicist who later became a professor of molecular biology at Princeton University, inspired by associative memory in the brain, wanted to build a network with 100 neurons, but that was beyond the then-prevailing computing capabilities. He therefore settled on a network of 30 neurons and demonstrated the idea of machine learning—a system by which a computer can learn from examples in data instead of being explicitly programmed.

Hopfield’s network consists of interconnected neurons or nodes. It is a single-layered, recurrent network with binary threshold nodes taking the states +1 and -1 (or 1 and 0). Each neuron stands for a component of memory, and each neuron is connected to every other neuron except itself (the diagonal of the weight matrix is zero).

The network has an energy function. It transforms itself through transitions to different states until it stabilizes; as it does so, its energy decreases, and a stable state, a local energy minimum, corresponds to a stored pattern. This is the key to associative memory: the network retrieves a stored pattern even when presented with an incomplete or noisy version of it.

It is the simplest mathematical model with built-in feedback loops, and hence the Hopfield network is supposed to work in a way similar to our brain. But Hopfield’s network, by virtue of having a limited number of neurons, had very limited memory storage.
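A minimal sketch of this idea, assuming the standard textbook formulation (Hebbian weight storage and asynchronous threshold updates), might look as follows; the pattern and network size are toy choices for illustration only.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: the weight between two neurons grows when they are active together.
    The diagonal is zeroed -- no neuron connects to itself."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=5):
    """Asynchronous updates: each neuron flips to match the sign of its input,
    which only ever lowers the network's energy, until a stable state is reached."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])   # one stored pattern (+1/-1 states)
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[[0, 3]] *= -1                                 # corrupt two components
print("recovered:", recall(W, noisy))
print("matches stored pattern:", np.array_equal(recall(W, noisy), stored[0]))
```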

Using the Hopfield network as the foundation, Geoffrey Hinton—often referred to as the “Godfather of AI”—came up with a new network using a different method: the Boltzmann machine. His network consists of two kinds of nodes (neurons): visible nodes, which receive the input data, and hidden nodes, which capture the statistical structure of that data.

Relying on the energy dynamics of statistical physics, Hinton showed that his generative computer model could learn to store data over time by training it using examples of things that it should remember. It can thus be used to classify images or create new examples of the patterns on which it was trained.

Hinton—along with psychologist David Rumelhart and computer scientist Ronald J. Williams—also developed the backpropagation algorithm, which minimizes errors at the output by propagating them backwards through the network to adjust its weights. It is this pioneering research of Hinton’s that proved fundamental to giving “depth” to deep learning.
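A minimal sketch of the idea, using the standard textbook recipe rather than anything specific to the laureates' own papers, is shown below: a tiny two-layer network learns XOR by propagating output errors backwards and nudging the weights downhill.

```python
import numpy as np

# Tiny two-layer network trained with backpropagation to learn XOR.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    lr = 0.5
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```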

The two Laureates’ research has thus transformed all areas of AI, from image processing and data science to natural language processing, with advances that are already impacting our daily lives. Indeed, the rapid progress in AI and its applications is causing anxiety, even to its godfather, Hinton. But the better way for businesses to handle this anxiety is to keep propelling forward.

**

May 10, 2024

Daniel Kahneman: The Grandfather of Behavioral Economics

Daniel Kahneman, a Nobel Laureate in economics, whose pioneering behavioral science research changed our understanding of how people think and make decisions, died at the age of 90 on March 27. 

 **



Daniel Kahneman—Eugene Higgins Professor of Psychology Emeritus at Princeton University and Professor of Psychology and Public Affairs Emeritus at Princeton’s Woodrow Wilson School of Public and International Affairs—collaborating with his colleague and friend of nearly 30 years, the late Amos Tversky, applied cognitive psychology to economic analysis and thereby built a “bridge between the economic and psychological analyses of individual decision making” under uncertainty, paving the way for “the new and rapidly expanding field of behavioral economics”.

Traditionally, economists assumed that each person makes rational choices in pursuit of his/her self-interest. Based on this assumption, they came up with ‘rational choice theory’ which states that individuals use rational calculations to make choices and achieve outcomes that are in alignment with their personal objectives. Accordingly, they built elaborate theoretical and mathematical models to explain how markets work to efficiently allocate capital and set prices.

Against this well-established theory, Daniel Kahneman and Amos Tversky demonstrated in the ’70s and early ’80s that individuals often make illogical choices that sabotage their economic interests, all the while believing that they are rational. They argued that humans faced with uncertain situations tend to form judgments and make decisions on the basis of systematic biases. According to them, people, the economic agents, rely as much on flimsy grounds as on solid evidence of the likely outcomes. They are guided less by probabilities and more by how closely a situation resembles their preconceived ideas. They care more about changes than about absolute levels, and care more about losses than about equal-sized gains. They even have a propensity to stick with the status quo.

Incorporating several such patterns, they came up with “Prospect Theory” in a paper published in Econometrica in 1979, explaining mathematically how people make choices in the face of risk and uncertainty. According to it, individuals “underweight outcomes that are merely probable in comparison with outcomes that are obtained with certainty”. This tendency, called the ‘certainty effect’, leads to risk aversion in choices involving sure gains and to risk seeking in choices involving sure losses. The theory also states that people generally discard components that are shared by all prospects under consideration and instead focus on the components that distinguish them. This kind of behavior, called the ‘isolation effect’, is likely to result in inconsistent preferences when the same choice is presented in different forms. They thus proposed an alternative theory of choice in which utility is based on changes in wealth rather than on absolute states of wealth, and in which individuals replace probabilities with decision weights.

There is another important element of the prospect theory: the key concept of ‘reference point’, which is nothing but the starting point from which individuals make decisions about gains and losses. It can be either an actual or an imaginary starting point. Kahneman and Tversky state that the reference point is determined by several factors such as: past experiences, current circumstances, cultural norms, individual preferences, etc. According to them, a reference point is not always static.

The prospect theory value function looks like an S-curve: it is concave for gains in the first quadrant and convex in the third quadrant, which represents losses. The slope of the loss function is generally steeper than that of the gain function, for people are known to assign a higher value per unit of loss than per unit of gain. In other words, people are more upset about losing something than they are happy about gaining the identical amount. This asymmetry explains people’s ‘loss aversion’. Prospect theory thus challenges the very basic tenets of the utility theory that was fundamental to economics.
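As an illustration of this S-shaped asymmetry, the sketch below uses the functional form and parameter estimates (alpha ≈ beta ≈ 0.88, lambda ≈ 2.25) reported in Tversky and Kahneman's later (1992) cumulative prospect theory paper; it is only a toy evaluation of the value function, not a full prospect-theory model.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky style value function over gains/losses from a reference point:
    concave for gains, convex and steeper (loss aversion) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

for amount in (100, -100):
    print(f"value of {amount:+}: {prospect_value(amount):+.1f}")
# A $100 loss is felt roughly 2.25 times as strongly as a $100 gain.
```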

Prospect theory also suggests that people make decisions in two stages: an early editing phase, in which individuals simplify complex situations by ignoring some information and by using mental shortcuts (heuristics), and a later evaluation phase, in which the edited prospects are evaluated and the prospect of highest value is chosen. According to prospect theory, people will be more risk-averse when the stakes are high and more risk-seeking when the stakes are low.

Kahneman popularized their research on prospect theory through his book, Thinking, Fast and Slow, published in 2011. In it, he demonstrated the extent to which our ability to make decisions is influenced by subconscious quirks and mental shortcuts that can ultimately distort our thoughts, of course, in predictable but irrational ways. He further explored how the brain processes information and makes decisions using two systems: System 1, the domain of intuitive responses, which is riddled with human biases; and System 2, which is slower, analytical, and deliberate, where we consciously collect evidence, sift through facts, evaluate them, and then take a decision.

Daniel Kahneman’s Thinking, Fast and Slow, which appeared on best-seller lists in science and business and made him popular in the public domain, was placed in the same league as Adam Smith’s The Wealth of Nations and Sigmund Freud’s The Interpretation of Dreams by Nassim Nicholas Taleb, the mathematical statistician, former option trader, and author of The Black Swan.

The prospect theory of Kahneman and Tversky later caught the attention of mainstream economists, who incorporated its insights into economic modeling. Indeed, psychological biases were documented and used to explain various economic topics, such as consumer behavior, labor markets, and financial markets, in the ensuing decades. Notable among these economists is Richard Thaler, who won the Nobel Prize for his work in behavioral economics. As Kahneman and Tversky categorically stated in their Econometrica paper, prospect theory, though explicitly concerned with monetary outcomes, can just as well be used to assess choices involving other attributes, such as quality of life or the impact of policy decisions.

After working on biases and how they lead to errors in judgment for almost 50 years of his career, Kahneman, having encountered another type of error called ‘noise’, worked on it and coauthored the book Noise: A Flaw in Human Judgment along with Olivier Sibony and Cass R Sunstein (Hachette Book Group, May 2021). The concept of noise and its adverse impact on judgments is, of course, not as familiar as the impact of psychological biases. Defining noise as the “unwanted variability in professional judgments”, the authors stress that the word ‘unwanted’ in the definition is the important one. For variability in certain judgments may not be a problem, and in certain cases it is even desirable—of course, but certainly not in professional judgments. For instance, if two doctors give two different diagnoses, at least one of them must be wrong. That is where variability in judgment is not permissible. Differentiating noise from bias, Kahneman once said: “Put simply, bias is the average error in judgments ...errors in those judgments all follow in the same direction, i.e., bias. By contrast, noise is the variability of error...errors in those judgments follow in many directions, that is noise”.
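To make this distinction concrete, here is a tiny sketch that computes bias as the average error and noise as the variability of errors for a set of judgments; the true value and the judgments are invented numbers used purely for illustration.

```python
import numpy as np

true_value = 100.0                                   # e.g., the "correct" premium or forecast
judgments = np.array([112, 95, 130, 88, 118, 104])   # invented professional judgments

errors = judgments - true_value
bias = errors.mean()         # average error: judgments pushed in one direction
noise = errors.std(ddof=1)   # variability of error: judgments scattered around their own mean

print(f"bias  = {bias:+.1f}")
print(f"noise = {noise:.1f}")
```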

Noise is found within the system in which judgments are made. In one of his lectures, Kahneman said, “Whenever there is judgment there is noise and probably more than you think”. To quote him, the most striking example of noise distorting judgments is performance reviews. His research had shown that only one quarter of a rating rests on actual performance, while the remaining three quarters is related to noise. It can be “level noise”—some raters are on average more generous than others; “occasion noise”—the rater may be in a better disposition today than on other days; or the idiosyncratic response of a rater to a ratee. Thus, when all these are put together, three quarters of a performance rating turns out to be based on pure noise. Citing the example of underwriting in the insurance sector, he said that it is understandable that there would be divergence in the premium quoted for the same policy by different underwriters, but the question is: “How much divergence?” He said that when the average divergence was computed in the study undertaken, it stood at a startling 56%. What is more alarming is the question Kahneman raised: “How can it be that people have that amount of noise in judgment and not be aware of it?”

Noise can also exist within an individual. For instance, when a problem is presented twice to the same person, we notice that he or she, not realizing that it is the same problem, gives different answers. Driven by their research, the authors assert that wherever there is judgment, there is noise, and probably more of it than one thinks. Hence, Kahneman suggests a “noise audit” followed by the practice of “decision hygiene” as a means to reduce noise in organizations. Their book indeed suggests a set of specific procedures for reducing noise. One of the important suggestions Kahneman made for better judgments is: “Don’t trust people, trust algorithms”. For algorithms are rule-based. He further said, “Train people in a way of thinking and in a way of approaching problems that will impose uniformity”.

Kahneman never took an economics course. But his path-breaking research with his friend Tversky, which revealed how hard-wired the human brain is with mental biases that warp people’s judgments, simply transformed the fields of economics and investment theory. This lifelong research fetched Kahneman the Nobel Prize in economic sciences in 2002. To quote the Harvard psychologist and author Steven Pinker, his work would remain “monumental in the history of thought”. In an interview in 2021, this mighty thinker wondered how “linear people” would adjust to the quickly advancing, nay exponentially advancing, artificial intelligence. Such was his insatiable interest in how others think!

**


October 07, 2022

Congratulations to the Winners of 2022 Nobel Prize in Physics

The Nobel Prize season has arrived amidst crises—war in Ukraine, disruptions in energy and food supplies, the fallout from the Covid-19 pandemic, the climate crisis, and whatnot—yet the world’s most prestigious prize commands the attention of the whole world, and rightly so. As the dates neared, all eyes turned towards the Royal Swedish Academy of Sciences for its announcements.

The first two announcements, covering the fields of medicine and physics, have indeed pleased many. Let me first take you through the physics prize, for that metaphysics-like phenomenon, quantum entanglement, and its resolution by the Laureates is pretty interesting to know about.

First things first: this year’s Nobel Prize for physics has been awarded to the trio:

Alain Aspect from Universite Paris-Saclay in France,

John Clauser from JF Clauser & Associates in the US, and

Anton Zeilinger from the University of Vienna, Austria

“for [their ground-breaking] experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”. The three will share the prize money of 10 mn Swedish kronor (about US$915,000) equally.

Next comes the obvious question: what are the experiments carried out by these Laureates, and what did they tell us? Before addressing it, let us first take a look at quantum mechanics and its quirkiness.

Our physics textbooks of school days told us that, by using the equations given in them, we can predict exactly how things will behave in the macroscopic world. But in the quantum world (a framework that physicists invented to describe sub-atomic systems), nothing is known for certain. For instance, we never know exactly where an electron in an atom is located; we only know where it might be. Everything is a probability. For example, the quantum state of an electron describes all the places one might find it, together with the probabilities of finding it at those places.

Another unique feature of quantum states is that they can be correlated with other quantum states, meaning that the measurement of one state can affect the other. This phenomenon of intimate linkage between two sub-atomic particles, even when separated by billions of light years of space, is called ‘quantum entanglement’. Because of this linkage, a change induced in one will affect the other. Schrödinger, the physicist who coined the term ‘entanglement’, said it is the most essential aspect of quantum mechanics.

However, this bizarre, counterintuitive phenomenon of instantaneous entanglement of particles, even ones placed on opposite ends of the galaxy, failed to convince Einstein. For the phenomenon cannot be explained by saying that the particles are mysteriously communicating with each other, since such communication would have to be faster than light to create an instantaneous effect, and that is simply forbidden by Einstein’s special theory of relativity. Thus emerged the EPR paradox; Einstein dubbed the phenomenon “spooky action at a distance”. Perhaps led by this paradox, Einstein felt that quantum theory was incomplete. He even believed that elements connecting the variables of one particle to another—which he called “local hidden variables”—would eventually be found.

In 1964, John Stewart Bell came up with a theoretical test showing that certain quantum correlations, unlike all other correlations in the universe, cannot arise from any local cause. He thus ruled out the existence of the ‘hidden variables’ that Einstein and a few other physicists believed played a role in quantum entanglement. This breaking of local realism is referred to as “the violation of Bell inequalities”.
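A minimal numerical sketch of what a "violation of a Bell inequality" means, using the CHSH form of the inequality and the textbook quantum prediction for a spin-singlet pair (E(a, b) = -cos(a - b)), is given below; any local-hidden-variable account must keep |S| at or below 2, whereas quantum mechanics reaches 2√2.

```python
import numpy as np

def E(a, b):
    """Quantum prediction for the correlation of measurements along
    directions a and b on a pair of particles in the singlet (entangled) state."""
    return -np.cos(a - b)

# CHSH combination of four correlations; any local-hidden-variable theory gives |S| <= 2.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"|S| predicted by quantum mechanics: {abs(S):.3f}")   # 2*sqrt(2) ~ 2.828
print("local-realist bound: 2")
```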

It is from here that the work of the present laureates began. All three of them carried out experiments to test Bell’s theorem experimentally to establish that quantum mechanics is complete.

John Clauser, an American theoretical and experimental physicist, along with a UC Berkeley graduate student, Stuart Freedman (who, unfortunately, died in 2012), carried out the first experiment to hunt for a violation of Bell inequalities, in 1972. They sent two entangled photons in opposite directions toward fixed polarization filters. The results obtained clearly violated Bell’s inequality, proving that quantum mechanics cannot be replaced by a theory that uses hidden variables.

There, however, remained a few loopholes after Clauser’s experiment. To rule them out, Alain Aspect, a physicist from the University of Paris-Saclay, came up with a new experimental setup in 1982. He managed to switch the measurement settings after an entangled pair had left its source. Thereby, he succeeded in proving that the settings that existed when the photons were emitted could not affect the result, i.e., the violation of Bell’s inequality. He could thus show that there are no hidden variables dictating that the other entangled particle behave just as the first particle did.

In 1997, Anton Zeilinger, the third laureate, moving a step ahead, demonstrated the transfer of quantum information from an entangled pair to a third particle. His group also demonstrated the possibility of quantum teleportation, a phenomenon of moving a quantum state from one particle to another at a distance. His work has indeed shown the possibility of linking a series of entangled systems together to build the quantum equivalent of a network.

As Anders Irback, Chair of the Nobel Committee for Physics, said, the trio’s work with entangled states thus not only answered fundamental questions about the interpretation of quantum mechanics but also paved the way for a new kind of quantum technology to emerge.

The first application that strikes the mind when you think of quantum entanglement is cryptography. A sender and a receiver can build a secure communication link through entangled particles by generating private keys. These keys can then be used to encode their messages. If someone intercepts the signal and attempts to read the private keys, the entanglement breaks, since measuring an entangled particle changes its state. This enables the sender and the receiver to know that their communication has been compromised. 

Another application that comes to mind is quantum computing. When a large number of entangled particles work in concert, it becomes feasible to solve large, complex problems. A quantum computer with just 10 qubits can represent as much information as 2^10 = 1,024 traditional bits.
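As a back-of-the-envelope illustration of that claim, describing the joint state of n qubits takes 2^n amplitudes, so the numbers grow very quickly:

```python
# State-space size of an n-qubit register: 2**n amplitudes.
for n_qubits in (1, 2, 10, 20, 50):
    print(f"{n_qubits:>2} qubits -> state described by 2^{n_qubits} = {2 ** n_qubits:,} amplitudes")
```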

Thus, the pathbreaking experiments of the trio opened up a new field of science and technology called Quantum Information Science (QIS) that has applications in computing, communication, sensing and simulation.

**

 

September 01, 2022

Mikhail Gorbachev: The Last President of the USSR

Mikhail Gorbachev, “a visionary who changed his country and the world” by ending the Cold War without bloodshed, died on Tuesday at the age of 91.



The peasant boy, who described himself as “belonging to the so-called children-of-the-war [Nazis] generation”, on becoming General Secretary of the Soviet Communist Party in 1985 at the young age of 54, set out to make the world less suspicious of communism and, more importantly, conflict-free by avoiding the “traditional, authoritarian, anti-western norm” of predecessors like Brezhnev.

He was none other than that “exceptional ruler” of Russia and “world statesman”, Gorbachev, who chose “glasnost” and “perestroika” not as mere slogans but as the path forward to reform the communist party and give the people of the erstwhile Soviet Union the choice to make their society a more open one.

Though he came to power as a loyal son of the communist party, Gorbachev, on assuming power and looking with new eyes at the legacy of seven decades of Communist rule: the corruption it bred, the demotivated workforce producing shoddy goods, and the sloppy distribution system, said to Eduard A. Shevardnadze, who later became his trusted foreign minister, “We cannot live this way any longer”.

Accordingly, he lifted restrictions on the media, allowed previously censored books to be published, oversaw an attack on corruption in the upper echelons of the communist party, as a result of which hundreds of bureaucrats lost their jobs, and finally, moving away from the State policy of atheism, promulgated a freedom-of-conscience law guaranteeing the right of the people to “satisfy their spiritual need”.

Simply put, Gorbachev, in the words of the former Russian liberal opposition leader Grigory Yavlinsky, “gave freedom to hundreds of millions of people in Russia and around it, and also half of Europe”. Indeed, “few leaders in history have had such a decisive influence on their time”.

He gave Russians free elections and a multiparty system, and simultaneously created parliamentary institutions to ensure participatory governance. Yet the ‘perestroika’ that he started could not reach the destination he wanted—a democratic, human socialism, which he perceived as a destination in itself rather than as a station on the path to communism. But Russian society, as the liberal economist Ruslan Grinberg said, perhaps “… don’t know what to do with it” [democracy].

Indeed, in an interview given in 2000, Gorbachev himself commented about this haplessness of the Russian society thus: “It’s not so easy to give up the inheritance we received from Stalinism and neo-Stalinism when people were turned into cogs in the wheel, and those in power made all the decisions for them”.

As the UN Secretary General Guterres wrote in his Twitter tribute, Gorbachev “was a one-of-a-kind statesman”, for he was the leader who “made no attempt to keep himself in office by using force.”

True, as the Secretary General said, “the world has lost a towering global leader, committed multilateralist, and tireless advocate for peace”, a fact well reflected in these extraordinary accomplishments of Gorbachev: one, he presided over an arms agreement with the US that eliminated, for the first time, an entire class of nuclear weapons; two, he withdrew most of the tactical nuclear weapons from Eastern Europe; three, he withdrew Soviet forces from Afghanistan; four, he allowed the unification of Germany in 1990 to proceed peacefully, an event which, according to many observers, brought the Cold War to an end; and five, unlike his predecessors, refusing to intervene militarily when the governments were threatened, he allowed the Eastern Bloc countries to break away and thus avoided significant bloodshed in Central and Eastern Europe.

Gorbachev, in the words of his biographer, William Taubman, a professor emeritus at Amherst College in Massachusetts, “was a good man—he was a decent man… his tragedy is in a sense that he was too decent for the country he was leading.” It was this decency that made him accept the Soviet Union’s dissolution as a fait accompli despite having all the power in the world to suppress the revolt. Nevertheless, by announcing his resignation as Soviet President in a speech delivered on December 25, 1991, in front of television cameras and broadcast internationally, thus: “I hereby discontinue my activities at the post of President of the Union of Soviet Socialist Republics…”, he perhaps averted a possible civil war.

In that speech, presenting his assessment of the path traversed by the USSR since 1986, he said that society “was already suffocating in the grip of the command-bureaucratic system. Doomed to serve ideology and to bear the burden of the arms race… everything had to be changed fundamentally. That is why I have never once regretted that I did not take advantage of the position of General Secretary just to “reign” for a few years. I would have considered that irresponsible and immoral.”

That was Gorbachev, the statesman, who was awarded the Nobel Peace Prize in 1990 “for the leading role he played in the radical changes in East-West relations.” Posterity will remember this “social democrat, who believed in equality of opportunity, publicly supported education and medical care, a guaranteed minimum of social welfare, and a ‘socially oriented market economy’—all within a democratic political framework”, as a leader who brought the Cold War to a peaceful end.

**

 

April 17, 2022

Markowitz’s ‘Portfolio Selection’ Turns 70 ….

Out of nowhere, a 25-year-old graduate student at the University of Chicago published a paper titled “Portfolio Selection” in the March 1952 issue of the Journal of Finance, putting forward a rigorous mathematical argument for the diversification of assets in a portfolio, for there is a difference between the riskiness of an individual stock and that of an entire portfolio. The 14-page paper, endowed with intellectual rigor and originality, led to its author, Harry Markowitz, being recognized subsequently as the father of modern financial economics.

Nearly four decades later, he was honored with the Nobel Prize in Economics in 1990 for his pioneering work on the theory of portfolio choice, which, arguing that “the riskiness of the portfolio had to do not only with the riskiness of the individual securities therein, but also to the extent that they moved up and down together”, proposed that a diversified, or optimal, portfolio could be created by mixing assets that do not move exactly together, so as to maximize return and minimize risk.

Of course, it is not that Markowitz was the first to come up with the desirability of portfolio diversification. Even the Babylonian Talmud of 500 C.E. advocates diversification, proclaiming a simple rule: one-third in real estate, one-third in merchandise, and the remaining third in liquid assets. For that matter, even Shakespeare’s Antonio of The Merchant of Venice says, “… I thank my fortune for it,/My ventures are not in one bottom trusted, /Nor to one place; nor is my whole estate/Upon the fortune of this present year …” Later, in 1738, Daniel Bernoulli argued in one of his articles that risk-averse investors prefer to diversify: “…it is advisable to divide goods which are exposed to some small danger into several portions rather than to risk them all together”.

Thus, diversification of investment is an age-old idea. What makes Markowitz’s advocacy of diversification different and pioneering is that he provided a quantitative framework to analyze the merit of a portfolio as a whole. This methodology enables investors to assess the risk and return of a chosen portfolio through three important variables, namely, expected return, standard deviation, and the coefficient of correlation.
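A minimal sketch of this arithmetic for a hypothetical two-asset portfolio, using invented figures for expected returns, standard deviations, and correlation, might look like this; note how the portfolio's volatility comes out below the weighted average of the individual risks.

```python
import numpy as np

# Hypothetical two-asset example: expected returns, standard deviations, correlation.
mu = np.array([0.08, 0.12])        # expected annual returns
sigma = np.array([0.10, 0.20])     # standard deviations (risk)
rho = 0.3                          # correlation between the two assets
w = np.array([0.6, 0.4])           # portfolio weights (sum to 1)

# Covariance matrix built from the standard deviations and correlation.
corr = np.array([[1.0, rho], [rho, 1.0]])
cov = np.outer(sigma, sigma) * corr

port_return = w @ mu
port_vol = np.sqrt(w @ cov @ w)
print(f"expected return: {port_return:.2%}, volatility: {port_vol:.2%}")
print(f"weighted average of individual risks: {w @ sigma:.2%}  (diversification lowers it)")
```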

His theory proposes the idea of a risk-return trade-off, which means that investors aiming for higher returns must be prepared to take more risk. It is from this assumption that the idea of the efficient portfolio—one that offers the highest expected return for any given degree of risk, or the lowest degree of risk for any given expected return—emerged. In fact, this idea of an efficient frontier became the guiding principle, albeit with several modifications, for investors who had till then been relying on ad hoc rules or gut feeling for investment decisions.

Nevertheless, there are financial analysts who complain that Markowitz’s framework suffers from two weaknesses. The first is its reliance on the correlation matrix of the portfolio’s returns, which reveals the extent to which any two assets move together. According to them, even a small change in correlation values leads to significant differences in the conclusions drawn, and hence they believe that structuring a portfolio based on such values may not yield the intended results.

The second complaint is about Markowitz’s selection of the standard deviation of returns as a proxy for risk. These analysts, arguing that risk is the possibility of expectations going wrong while uncertainty is not knowing what the future might bring, opine that standard deviation, a statistical metric that focuses on dispersion, cannot differentiate a return higher than expected from one lower than expected. To put it otherwise, standard deviation captures the uncertainty but not the risk of the portfolio.

Of course, though this shortcoming came to light with the emergence of risk metrics such as VaR, which focus on financial losses, theoreticians were said to be slow in accepting them, while investors moved away from Markowitz’s ideas much faster. Great ideas often suffer from implementation challenges!

This does not mean that Markowitz’s theory is wrong. On the contrary, his ideas of the risk-return trade-off, the efficient frontier, and the merits of diversification have remained the fundamental principles underlying all that has happened in the field of financial economics since 1952.

His innovative approach to portfolio management stood the test of time and still continues to be the benchmark against which the emerging alternatives are assessed. And the world of practicing financial professionals should be thankful for his seminal ideas.

 


October 22, 2021

‘Causal Inferences’ Led to Nobel Prize

Come October, I eagerly look forward to The Royal Swedish Academy’s announcement of Nobel prizes, particularly in economics. And like many others, I was delighted to hear this year's prize going to economists who are still very active in their fields of research.  

This year, the Sveriges Riksbank Prize in Economic Sciences in memory of Alfred Nobel has been awarded with one half of the prize money going to David Card of the University of California, Berkeley, USA, “for his empirical contributions to labour economics”, and the other half jointly to Joshua D. Angrist of the Massachusetts Institute of Technology, Cambridge, USA, and Guido W. Imbens of Stanford University, USA, “for their methodological contributions to the analysis of causal relationships”.

In the past—to be precise, till these three brilliant econometricians came up with their empirical studies analysing the labour market effects of minimum wages, immigration and education—economists struggled to figure out whether an observed relationship between two variables was causal or coincidental. For, unlike in the natural sciences, it is not possible in the social sciences to conduct rigidly controlled randomised experiments to verify causal relationships.

It is against this background that David Card, along with his late co-author Alan Krueger, analysed the labour market effects of a rise in the minimum wage and published the findings in their paper—Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania—that revolutionized empirical research in economics.

Taking advantage of a naturally occurring situation, viz., the raised minimum wage in the State of New Jersey and no such raise in the neighbouring State of Pennsylvania, Card and Krueger treated this policy change as though it were a naturally occurring random variation and studied the impact of a rise in the minimum wage on employment. Treating the fast-food restaurants in New Jersey as the ‘treatment’ group and their counterparts in neighbouring Pennsylvania as the ‘control’ group, they tested the hypothesis that raising minimum wages lowers labour demand by comparing changes in employment levels before and after the wage rise in New Jersey, and found “no indication that the rise in the minimum wage reduced employment.”
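A minimal sketch of the difference-in-differences comparison behind this design is shown below; the employment figures are invented for illustration and are not the paper's actual estimates.

```python
# Difference-in-differences sketch of the Card-Krueger design (numbers are invented).
# New Jersey (treatment) raised its minimum wage; Pennsylvania (control) did not.
nj_before, nj_after = 20.0, 20.5   # average employment per fast-food restaurant
pa_before, pa_after = 23.0, 21.5

change_nj = nj_after - nj_before   # change in the treatment group
change_pa = pa_after - pa_before   # change in the control group (common trend)
did = change_nj - change_pa        # effect attributed to the minimum-wage rise

print(f"NJ change: {change_nj:+.1f}, PA change: {change_pa:+.1f}, DiD estimate: {did:+.1f}")
```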

The results of this study, challenging the conventional wisdom—the textbook competitive model of the labour market, which predicts that a rise in the minimum wage will have a negative impact on employment—simply offered a new way of doing economic research. Continuing with the new-found concept of ‘natural experiments’, Card studied a variety of other important policy questions, such as: How does immigration affect the employment levels and wages of native workers? How do additional years of education affect a student’s future income? His pioneering work, which defied conventional thinking, revolutionised research not only in economics but in all the other social sciences, besides bettering our understanding of how the labour market operates in real-world situations.

Taking this path-breaking research forward, Prof. Imbens and Prof. Angrist made innovative methodological contributions to drawing precise conclusions about cause and effect from ‘natural experiments’ whose results were otherwise difficult to interpret. For instance, it is said that in the US, graduates of private universities earn higher wages than public university graduates. This phenomenon tempts one to jump to the conclusion that private universities cause wages to go up. But the research of these two econometricians offered a correction for ‘selection bias’, i.e., it adjusted for the fact that SAT scores and family incomes are higher for students of private universities. Thus, comparing ‘like’ with ‘like'—apples with apples—they found that attending private universities does not confer a wage premium.

It is by developing such causal techniques, based on a comparison of observed outcomes with counterfactuals–the ‘what if’ scenarios or potential outcomes that are not observed–that Prof. Imbens and Prof. Angrist eventually proposed effective mathematical and statistical methods to disentangle causal effects from messy observed data. They proposed a simple two-step process to estimate causal effects: first, use “instrumental variables” to mimic the threshold difference between the two separate groups meant for comparison; and second, while evaluating the effects, spell out the assumptions that need to be factored in—developing the Local Average Treatment Effect (LATE). Together, these two steps boost the transparency and credibility of empirical research. These techniques were replicated by many researchers in different contexts, validating the effectiveness of their contributions.
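As a minimal sketch of the instrumental-variables logic behind LATE, the snippet below computes the simple Wald estimator on synthetic data in which only 'compliers' take up the treatment when encouraged; the variable names and numbers are invented purely for illustration.

```python
import numpy as np

# Wald / IV estimator sketch: effect of a treatment D on outcome Y, using an
# instrument Z that shifts treatment take-up but affects Y only through D.
# All data below are synthetic, for illustration only.
rng = np.random.default_rng(0)
n = 10_000
Z = rng.integers(0, 2, n)                   # instrument (e.g., encouragement/eligibility)
compliance = rng.random(n) < 0.6            # compliers take the treatment when encouraged
D = (Z == 1) & compliance                   # treatment actually received
Y = 1.0 + 2.0 * D + rng.normal(0, 1, n)     # true effect on compliers is 2.0

late = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (D[Z == 1].mean() - D[Z == 0].mean())
print(f"Wald / LATE estimate: {late:.2f}   (true complier effect: 2.0)")
```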

Now, one may wonder whether, in the present age of data science and AI, these econometric techniques still hold good for interpreting causal relations. But Prof. Angrist argues that big data may help in ‘curve fitting’—in showing a pattern—but does not throw light on causation. Since it neither explains the reasons behind the pattern nor offers any scope for evaluating counterfactual scenarios, we still need econometric tools.

As the Nobel Committee observed, these three brilliant econometricians, laying the foundation for the “design-based approach”, have “radically changed how empirical research is conducted over the past 30 years”, besides paving the way for a great improvement in our ability to answer causal questions, which in turn has enhanced the effectiveness of economic and social policies.

**

November 10, 2020

A ‘Nobel’ Reward for Auction ‘Engineers’

Paul R. Milgrom and Robert B. Wilson, both of Stanford University, have been jointly awarded this year’s Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for “improvements to auction theory and inventions of new auction formats”. 

The practice of selling valuable items through auction to the highest bidder has been in vogue for ages. Two simple auction formats have long dominated the scene: the English auction, where ascending bids are made until only one buyer is left willing to pay a certain amount and no higher bid is received within a given time period; and the Dutch auction, where the auctioneer sets a high opening price that is gradually reduced until a bidder is found. However, as their usage has expanded to cover a variety of assets, auctions have become more complex, besides acquiring far greater importance.

In fact, it was William Vickrey who first came out with an auction theory, in the 1960s, by narrowly focusing his study on each person’s private or subjective valuation of the goods or services for sale. But in reality, each bidder’s valuation of the good being auctioned is not independent of the other bidders’ valuations, as is noticed in the English auction, where each bidder, having started at a low price, moves on to higher prices depending on the prices quoted by other bidders, which are, of course, again based on their own private information, and thus arrives at a ‘common value’.

It thus shows that entirely private values are a rarity in auctions. Robert Wilson became the first economist to research auctions taking the common value of the good—a value that is uncertain at the beginning of the auction but is, in the end, the same for everyone—into consideration. Using game theory, he showed in three of his classic papers published in the 1960s and ’70s how the best bidding strategies in common value auctions lead to low bids, as participants in an auction try to avoid the ‘winner’s curse’: overestimating the common value and winning the auction at too high a price. He also showed that the problems caused by the winner’s curse are even greater when some bidders have better information than others: bidders with an information disadvantage will bid even lower or may abstain from participating in the auction altogether.
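A small simulation sketch of the winner's curse may help here: each bidder observes only a noisy signal of the same underlying value, and if everyone naively bids their own signal, the winner, by construction the most optimistic bidder, overpays on average. The numbers are invented for illustration.

```python
import numpy as np

# Winner's-curse sketch: common-value auction with noisy private signals.
rng = np.random.default_rng(42)
n_auctions, n_bidders = 10_000, 5
true_value = 100.0

signals = true_value + rng.normal(0, 10, (n_auctions, n_bidders))  # noisy estimates
naive_winning_bids = signals.max(axis=1)    # everyone naively bids their own signal

avg_overpayment = (naive_winning_bids - true_value).mean()
print(f"average overpayment by the naive winner: {avg_overpayment:.1f}")
# The winner is systematically the most optimistic bidder -- hence rational bidders
# shade their bids below their signals, as Wilson's analysis predicts.
```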

Analyzing bids in auctions with both private and common values turned out to be trickier than what Vickrey and Wilson had thought, for the technology or specialization owned by a bidder significantly differentiates the private value of a good from one bidder to another. For instance, the private value of an oilfield depends not only on the estimate of the oil reserve but also on the cost of extraction, which depends on the technology used by a company and hence varies from bidder to bidder. This riddle was finally cracked by Paul Milgrom—incidentally, a doctoral student of Robert Wilson—in a couple of papers published in the 1980s. His research revealed that auction formats that elicit more private information from bidders, such as English auctions, where every bidder observes who bids what and who drops out at what price, reduce the winner’s curse problem compared with formats such as sealed bids that divulge very little private information. Given this link, it becomes imperative that a seller, in his own interest of maximizing revenue, provide bidders with as much information as possible about the goods being auctioned.

To their credit, Milgrom and Wilson have not limited themselves to developing fundamental auction theory. They have also put their theoretical knowledge to practical use and evolved new, better auction formats for complex situations where the existing formats were found inadequate. In the 1990s, when the US Congress permitted the Federal Communications Commission to use auctions to sell radio spectrum to telecom companies, it posed a big challenge to the Commission, for no one knew how an auction would work when the value of a piece of spectrum in a specific region also depends on the other frequency bands owned by a given bidder. To tackle this problem, Milgrom and Wilson, partly in association with Preston McAfee, came up with a new auction format, the Simultaneous Multiple Round Auction (SMRA), which offers all pieces of spectrum for bidding simultaneously. The SMRA model allows participants to bid on all items over a number of rounds, as a result of which some information about bids and prices is revealed to bidders, whereby the winner’s curse is reduced. Since then, many other countries, including India, have followed this model successfully.

These two economists, who developed auction theory and then turned engineers, have also designed auction formats that have benefited buyers, sellers, and society as a whole, besides winning them a Nobel Prize.

 

October 16, 2019

John B Goodenough: Nobel at the age of 97




The father of lithium batteries, John Goodenough, who has been awarded the Nobel Prize in Chemistry for 2019 along with two others, is still very active in research at the age of 97.


John Goodenough of the University of Texas at Austin won the Nobel Prize in Chemistry for 2019 along with two others—Stanley Whittingham of Binghamton University, New York, and Akira Yoshino of Meijo University—for his work on the rechargeable lithium-ion battery that today powers everything from cell phones to laptops and electric vehicles.

Goodenough, aged 97, is the oldest ever winner of a Nobel Prize. It was in the ’80s that Prof. Goodenough, having moved from the US to Oxford as a professor, picked up the work that Prof. Whittingham had carried out to develop lithium batteries as a scientist at Exxon in the US in the early 1970s, work that was discontinued in the early ’80s as the oil company cut back its expenditure on research.

Prof. Goodenough, predicting that a cathode made of a metal oxide rather than a sulphide would have greater potential, improved the battery’s performance by introducing a new material—cobalt oxide—for its cathode. The Nobel committee considered this a “decisive step towards the wireless revolution”.

Prof. Akira Yoshino and his colleagues at Asahi Kasei, the Japanese chemicals company, taking Goodenough’s cathode as a basis and using petroleum coke—a carbon material—in the anode, developed the first commercially viable lithium-ion battery in 1985.

Thus came into the market a lightweight, hardwearing battery that could be charged hundreds of times before its performance deteriorated. Its commercial introduction in 1991 has revolutionized our lives. To quote Yoshino, “the way [these] batteries store electricity makes them very suitable for a sustainable society.”

So, rewarding such work with a fitting prize is, no doubt, good news. But to my mind, what struck me as the really big news is this: Prof. Goodenough is still actively pursuing his research interests—“studying relationships between the chemical, structural and electrical properties of solids, addressing fundamental solid-state problems in order to design new materials that can enable an engineering function”—as the Virginia H. Cockrell Chair in Engineering at The University of Texas at Austin, USA, at the age of 97, and is even publishing research papers.

This 97-year-old professor, believing that “We have to … make a transition from our dependence on fossil fuels to a dependence on clean energy” and saying, “So that’s what I’m currently trying to do before I die”, comes to his lab every morning before 8 a.m. and, with a small flock of graduate students and postdoctoral researchers, works on designing a new battery to reduce our dependence on fossil fuels.

In line with this ambition, he, along with a colleague, Maria H Braga, a senior research fellow, published a paper in Energy & Environmental Science in December 2016 about a glass battery—a type of solid-state battery with a glass electrolyte and lithium or sodium metal electrodes—that indeed generated controversy, because they also claimed that its storage capacity increases with age.

Controversy, because thermodynamics maintains that a battery can only deteriorate over many charge-discharge cycles. Of course, Goodenough and Braga have an explanation. According to them, their glass electrolyte is a ferroelectric material whose polarization switches back and forth in the presence of an outside field. As a result, the charge-discharge cycles keep jiggling the electrolyte back and forth, and over a period this perhaps leads to the emergence of an ideal configuration of each electromagnetic dipole.

Controversies apart, what is worth noting here is Prof. Goodenough’s active engagement in research even at the age of 97 and his craving to develop and offer a product that is good for the world. His desire to do good for society echoes well in his comments on science and its utility, which indeed merit everyone’s attention: “Technology is morally neutral—you can use it for good and for evil. You can use it to explode bombs under somebody’s vehicle. You can use it to steal a bank account. As scientists, we do the best we can to provide something for society. But if society cannot make the moral decisions that are necessary, they only use it to destroy themselves.”

Above all, there is another statement that he made after receiving the Nobel Prize that calls for our deep reflection: “They don’t make you retire at the University of Texas at a certain age, so I’ve had an extra 33 years and I’m still working every day.”

This makes me wonder why our universities are not encouraging such possibilities on our campuses. …. Secondly, whatever little we hear of such possibilities here and there, say, for instance, one such facility offered to retired professors by JNU, is steeped in murky controversies. Now the disturbing question that a layman on the street, on hearing of such episodes from abroad, faces is: Are we not capable of cultivating and nursing such a healthy work culture in our universities?

