Technology Evidence File

Jan19: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
DeepMind’s AlphaStar software has for the first time defeated a top-ranked professional player five games to zero in the real-time strategy game StarCraft II.
SURPRISE
As DeepMind notes, “until now, AI techniques have struggled to cope with the complexity of StarCraft…

The need to balance short and long-term goals and adapt to unexpected situations, poses a huge challenge…Mastering this problem required breakthroughs in several AI research challenges including:

Game theory: StarCraft is a game where, just like rock-paper-scissors, there is no single best strategy. As such, an AI training process needs to continually explore and expand the frontiers of strategic knowledge.

Imperfect Information: Unlike games like chess or Go where players see everything, crucial information is hidden from a StarCraft player and must be actively discovered by “scouting”.

Long term planning: Like many real-world problems cause-and-effect is not instantaneous. Games can also take anywhere up to one hour to complete, meaning actions taken early in the game may not pay off for a long time.

Real time: Unlike traditional board games where players alternate turns between subsequent moves, StarCraft players must perform actions continually as the game clock progresses.

Large action space: Hundreds of different units and buildings must be controlled at once, in real-time, resulting in a huge combinatorial space of possibilities…

Due to these immense challenges, StarCraft has emerged as a “grand challenge” for AI research.”

DeepMind’s latest achievement is further evidence of the accelerating pace at which AI capabilities are improving. To be sure, StarCraft is still a discrete system, governed by an unchanging set of rules. In that sense, it critically differs from real-world complex socio-technical systems, in which agents’ adaptive actions are not constrained by unchanging rules, and system dynamics evolve over time.

In real world complex adaptive systems, making sense of new information in an evolving context, inducing abstract concepts from novel situations, and then using them to rapidly reason about the dynamics of a situation and the likely impact of possible actions remain, for now, beyond the capabilities of the most advanced artificial intelligence systems.

In addition, some critics have noted that AlphaStar’s victory owed more to the speed of its play relative to its very talented human opponent, and to the physical accuracy of its moves (placement of units on the map), than it did to superior strategy (e.g., see “An Analysis On How Deepmind’s Starcraft 2 AI’s Superhuman Speed is Probably a Band-Aid Fix For The Limitations of Imitation Learning” by Aleksi Pietikainen).

But all that said, AlphaStar’s success still reminds us that the gap between the capabilities of AI and human beings, even in cognitively challenging areas, is closing faster than many people appreciate.
“Quantum Terrorism: Collective Vulnerability of Global Quantum Systems” by Johnson et al
SURPRISE
Quantum computing, while promising to be exponentially more powerful than today’s technology for certain problems, will also bring new vulnerabilities. The authors “show that an entirely new form of threat arises by which a group of 3 or more quantum-enabled adversaries can maximally disrupt the global quantum state of future systems in a way that is practically impossible to detect, and that is amplified by the way that humans naturally group into adversarial entities.”
Shoshana Zuboff’s new book, “The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier of Power” crystallizes the often unspoken worries that many people have felt about exponentially improving artificial intelligence technologies.
SURPRISE
Writing in the Financial Times, Zuboff describes “a new economic logic [she] calls ‘surveillance capitalism’”. It “was invented in the teeth of the dot.com bust, when a fledgling company called Google decided to try and boost ad revenue by using its exclusive access to largely ignored data logs — the “digital exhaust” left over from users’ online search and browsing. The data would be analysed for predictive patterns that could match ads and users. Google would both repurpose the “surplus” behavioural data and develop methods to aggressively seek new sources of it…These operations were designed to bypass user awareness and, therefore, eliminate any possible “friction”. In other words, from the very start Google’s breakthrough depended upon a one-way mirror: surveillance…”

“Surveillance capitalism soon migrated to Facebook and rose to become the default model for capital accumulation in Silicon Valley, embraced by every start-up and app. It was rationalised as a quid pro quo for free services but is no more limited to that context than mass production was limited to the fabrication of the Model T. It is now present across a wide range of sectors, including insurance, retail, healthcare, finance, entertainment, education and more. Capitalism is literally shifting under our gaze.”

“It has long been understood that capitalism evolves by claiming things that exist outside of the market dynamic and turning them into market commodities for sale and purchase. Surveillance capitalism extends this pattern by declaring private human experience as free raw material that can be computed and fashioned into behavioural predictions for production and exchange…”

“Surveillance capitalists produce deeply anti-democratic asymmetries of knowledge and the power that accrues to knowledge. They know everything about us, while their operations are designed to be unknowable to us. They predict our futures and configure our behaviour, but for the sake of others’ goals and financial gain. This power to know and modify human behaviour is unprecedented.

“Often confused with totalitarianism and feared as Big Brother, it is a new species of modern power that I call ‘instrumentarianism’. [This] power can know and modify the behaviour of individuals, groups and populations in the service of surveillance capital. The Cambridge Analytica scandal revealed how, with the right knowhow, these methods of instrumentarian power can pivot to political objectives. But make no mistake, every tactic employed by Cambridge Analytica was part of surveillance capitalism’s routine operations of behavioural influence.”

As the Guardian noted in its review of her book, Zuboff “points out that while most of us think that we are dealing merely with algorithmic inscrutability, in fact what confronts us is the latest phase in capitalism’s long evolution – from the making of products, to mass production, to managerial capitalism, to services, to financial capitalism, and now to the exploitation of behavioural predictions covertly derived from the surveillance of users.”

“The combination of state surveillance and its capitalist counterpart means that digital technology is separating the citizens in all societies into two groups: the watchers (invisible, unknown and unaccountable) and the watched. This has profound consequences for democracy because asymmetry of knowledge translates into asymmetries of power. But whereas most democratic societies have at least some degree of oversight of state surveillance, we currently have almost no regulatory oversight of its privatised counterpart. This is intolerable.”
In light of Zuboff’s book, the provocative title of an article in the 24Jan19 Economist (“The French Fine Against Google is the Start of a War”) does not seem excessive.
SURPRISE
“On January 21st France’s data-protection regulator, which is known by its French acronym, CNIL, announced that it had found Google’s data-collection practices to be in breach of the European Union’s new privacy law, the General Data Protection Regulation (GDPR). CNIL hit Google with a €50m ($57m) fine, the biggest yet levied under the GDPR. Google’s fault, said the regulator, had been its failure to be clear and transparent when gathering data from users…”

“The fine represents the first volley fired by European regulators at the heart of the business model on which Google and many other online services are based, one which revolves around the frictionless collection of personal data about customers to create personalised advertising. It is the first time that the data practices behind Google’s advertising business, and thus those of a whole industry, have been deemed illegal. Google says it will appeal against the ruling. Its argument will not be over whether consent is required to collect personal data—it agrees that it is—but what quality of consent counts as sufficient…Up to now the rules that underpin the digital economy have been written by Google, Facebook et al. But with this week’s fine that is starting to change.”

The growing public anger in the West over reduced privacy that both Zuboff’s book and the CNIL fine represent has important implications for the race to create ever more powerful machine learning/artificial intelligence capabilities, whose advancement is critically dependent on access to large amounts of training data. In China, data privacy is not an issue. In Europe, it is a very serious issue today. The US currently lies somewhere in between.

While emerging technologies like Generative Adversarial Networks may in the future be used to quickly generate high-quality simulated data that can be used to train AI, we aren’t there yet. Until we are, the data privacy issue will be inextricably linked to the pace of AI development, which in turn has national security as well as economic and social implications.
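For readers unfamiliar with how Generative Adversarial Networks produce synthetic data, the minimal sketch below illustrates the basic recipe: a generator learns to map random noise to records that a discriminator can no longer distinguish from real training data. The network sizes, names, and toy dataset are our own illustrative placeholders, not a production recipe.

```python
# Minimal, illustrative GAN sketch (PyTorch): a generator learns to produce
# synthetic records resembling a real dataset, which could then augment or
# replace privacy-sensitive training data. Sizes and data are toy placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = torch.randn(10_000, 2) * torch.tensor([1.0, 0.3]) + torch.tensor([2.0, -1.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake record
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # record -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2_000):
    real = real_data[torch.randint(0, len(real_data), (128,))]
    fake = G(torch.randn(128, 8))

    # Discriminator: label real records 1 and generated records 0
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its fakes as real
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(1_000, 8)).detach()   # simulated records for downstream training
```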
“We analyzed 16,625 papers to figure out where AI is headed next” by Karen Hao, in MIT Technology Review, 25Jan19
“The sudden rise and fall of different techniques has characterized AI research for a long time. Every decade has seen a heated competition between different ideas. Then, once in a while, a switch flips, and everyone in the community converges on a specific one. At MIT Technology Review, we wanted to visualize these fits and starts. So we turned to one of the largest open-source databases of scientific papers, known as the arXiv (pronounced “archive”). We downloaded the abstracts of all 16,625 papers available in the “artificial intelligence” section through November 18, 2018, and tracked the words mentioned through the years to see how the field has evolved…”

“We found three major trends: a shift toward machine learning during the late 1990s and early 2000s, a rise in the popularity of neural networks beginning in the early 2010s, and growth in reinforcement learning in the past few years…
The biggest shift we found was a transition away from knowledge-based systems by the early 2000s. These computer programs are based on the idea that you can use rules to encode all human knowledge. In their place, researchers turned to machine learning—the parent category of algorithms that includes deep learning…Instead of requiring people to manually encode hundreds of thousands of rules, this approach programs machines to extract those rules automatically from a pile of data. Just like that, the field abandoned knowledge-based systems and turned to refining machine learning…”

“In the few years since the rise of deep learning, our analysis reveals, a third and final shift has taken place in AI research. As well as the different techniques in machine learning, there are three different types: supervised, unsupervised, and reinforcement learning. Supervised learning, which involves feeding a machine labeled data, is the most commonly used and also has the most practical applications by far. In the last few years, however, reinforcement learning, which mimics the process of training animals through punishments and rewards, has seen a rapid uptick of mentions in paper abstracts… [The pivotal] moment came in October 2015, when DeepMind’s AlphaGo, trained with reinforcement learning, defeated the world champion in the ancient game of Go. The effect on the research community was immediate…

Our analysis provides only the most recent snapshot of the competition among ideas that characterizes AI research. But it illustrates the fickleness of the quest to duplicate intelligence… Every decade, in other words, has essentially seen the reign of a different technique: neural networks in the late ’50s and ’60s, various symbolic approaches in the ’70s, knowledge-based systems in the ’80s, Bayesian networks in the ’90s, support vector machines in the ’00s, and neural networks again in the ’10s. The 2020s should be no different, meaning the era of deep learning may soon come to an end.”
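The keyword-tracking methodology quoted above is simple to approximate. The sketch below assumes the abstracts have already been downloaded into a list of (year, abstract) pairs; the keyword list and placeholder records are our own illustrative choices, not the ones the article’s authors used.

```python
# Illustrative sketch of the keyword-trend analysis described above: count how
# often technique-related terms appear in paper abstracts each year. Assumes
# abstracts were already fetched into (year, abstract_text) tuples; the keywords
# and placeholder records are our own choices.
from collections import Counter, defaultdict

papers = [
    (1999, "A knowledge-based system for rule encoding ..."),
    (2012, "We train a deep neural network on labeled data ..."),
    (2017, "A reinforcement learning agent maximizes reward ..."),
]  # placeholder for the ~16,625 downloaded abstracts

keywords = ["knowledge-based", "machine learning", "neural network", "reinforcement learning"]

counts_by_year = defaultdict(Counter)
for year, abstract in papers:
    text = abstract.lower()
    for kw in keywords:
        if kw in text:
            counts_by_year[year][kw] += 1

for year in sorted(counts_by_year):
    total = sum(1 for y, _ in papers if y == year)
    shares = {kw: counts_by_year[year][kw] / total for kw in keywords}
    print(year, shares)   # share of that year's abstracts mentioning each term
```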
“It’s Still the Prices, Stupid”, by Anderson et al. in Health Affairs
As we have noted, healthcare and education are critical “social technologies”, particularly in a period of rapid change and heightened uncertainty about employment (which, for many Americans, is the source of their health insurance). Improving the effectiveness, efficiency, and adaptability of both these technologies will have a critical impact on the economy, society, and politics in the future.

The authors of this article update a famous 2003 article titled “It’s the Prices, Stupid”, which “found that the sizable differences in health spending between the US and other countries were explained mainly by health care prices.”

The authors of the present article find that, “The conclusion that prices are the primary reason why the US spends more on health care than any other country remains valid, despite health policy reforms and health systems restructuring that have occurred in the US and other industrialized countries since the 2003 article’s publication. On key measures of health care resources per capita (hospital beds, physicians, and nurses), the US still provides significantly fewer resources compared to the OECD median country. Since the US is not consuming greater resources than other countries, the most logical factor is the higher prices paid in the US.”
On the education front, Colorado recently updated its “Talent Pipeline” Report, which provides a stark reminder of how poorly the US education system is performing, even in a state with the nation’s second-highest percentage of residents with a bachelor’s degree or higher (about 40%).
Out of 100 students who complete 9th grade, 70 graduate from high school on time, 43 enroll in college that autumn, 32 return after their first year of college, and just 25 graduate from college within six years of starting it.
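The attrition implied by those figures is easier to see as stage-to-stage conversion rates. The short calculation below simply restates the report’s numbers; no additional data is assumed.

```python
# Stage-to-stage conversion rates implied by the Colorado "Talent Pipeline"
# figures quoted above (per 100 students completing 9th grade).
stages = [
    ("complete 9th grade", 100),
    ("graduate high school on time", 70),
    ("enroll in college that autumn", 43),
    ("return after first year of college", 32),
    ("graduate college within six years", 25),
]

for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
    print(f"{name}: {n}/{prev_n} = {n / prev_n:.0%} of those who {prev_name}")
print(f"overall: {stages[-1][1]}/{stages[0][1]} = {stages[-1][1] / stages[0][1]:.0%}")
```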

Results like these have two critical implications. First, they point to stagnant or declining human capital, which is a key driver of total factor productivity and thus long-term growth, particularly as the economy becomes more knowledge intensive. Second, at a time when the capabilities of labor-substituting technologies (like AI and automation) have been improving exponentially, the failure of human capital to keep pace will naturally induce businesses to invest more in the former and less in the latter, which would likely produce worsening economic inequality, rising unemployment, and skyrocketing government spending on social safety net programs – which will have to be paid for either with higher taxes or with unlikely cuts in other spending.
Dec18: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“Profiling for IQ Opens Up New Uber-Parenting Possibilities”, Financial Times, 22Nov18
SURPRISE

“A US start-up, Genomic Prediction, claims it can genetically profile embryos to predict IQ, as well as height and disease risk. Since fertility treatment often produces multiple viable embryos, only one or two of which can be implanted, prospective parents could pick those with the “best” genes.” The FT notes the implications of this technology development: “We are sliding into Gattaca territory, in which successive generations are selected not only for health but also for beauty, intellect, stature and other aptitudes. Parents may even think it their moral duty to choose the “best” possible baby, not just for themselves but to serve the national interest. Carl Shulman and Nick Bostrom, from the Future of Humanity Institute at Oxford university, predicted in 2013 that some nations may pursue the idea of a perfectible population to gain economic advantage. China, incidentally, has long been reading the genomes of its cleverest students.” If this technology is scaled up, the economic, national security, social, and political implications will be both profound and highly disruptive.
“Natural Language Understanding Poised to Transform How We Work”, Financial Times, 3Dec18
The FT notes, “If language understanding can be automated in a wide range of contexts, it is likely to have a profound effect on many professional jobs. Communication using written words plays a central part in many people’s working lives. But it will become a less exclusively human task if machines learn how to extract meaning from text and churn out reports.” Up to now, an obstacle “in training neural networks to do the type of work analysts face — distilling information from several sources — is the scarcity of appropriate data to train the systems. It would require public data sets that include both source documents and a final synthesis, giving a complete picture that the system could learn from. [However], despite challenges such as these, recent leaps in natural language understanding (NLU) have made these systems more effective and brought the technology to a point where it is starting to find its way into many more business applications.”

The article focuses on Primer (www.primer.ai), an AI start-up that has developed more capable natural language understanding technology, now in use by intelligence agencies.
“Why Companies that Wait to Adopt AI May Never Catch Up”, Harvard Business Review, by Mahidhar and Davenport, 3Dec18
SURPRISE

If the authors’ hypothesis is correct, then AI will lead to the intensification of “winner take all” markets, with more companies and business models struggling to earn economic profits. Research has shown that this will also lead to worsening income inequality between employees at the winning companies and everyone else.
“Your Smartphone’s AI Algorithms Could Tell if You are Depressed”, MIT Technology Review, 3Dec18
Reports on a Stanford study (“Measuring Depression Symptom Severity from Spoken Language and 3D Facial Expressions” by Haque et al) that used a combination of facial expressions, voice tone, and use of specific words to diagnose depression with 80% accuracy. This could well turn out to be a double-edged sword, with clear benefits for early diagnosis and treatment of mental illness, but equally important concerns about privacy (e.g., its use by employers or insurance companies).
The Artificial Intelligence Index 2018 Annual Report
This report provides a range of excellent benchmarks for measuring the rate of improvement for various AI technologies, and their adoption across industries. Key findings include substantial shortening of training times (e.g., for visual recognition tasks), and the increasing rate of improvement for natural language understanding based on the GLUE benchmark.
“Learning from the Experts: From Expert Systems to Machine Learned Diagnosis Models” by Ravuri et al
This paper describes how a model that embodies the knowledge of domain experts was used to generate artificial (synthetic) data about a system that was then used to train a deep learning network. This is an interesting approach that bears monitoring, particularly its potential future application to agent based modeling of complex adaptive systems.
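As a purely illustrative sketch of the general approach (not the authors’ actual model), the code below uses a hand-written rule set as a stand-in “domain expert” to label randomly generated cases, and then fits a neural network to that synthetic dataset so it can generalize beyond the explicit rules.

```python
# Illustrative sketch of the general approach described above: use a rule-based
# "expert" to label synthetic cases, then train a neural network on that data.
# The rules, features, and model here are toy stand-ins, not the paper's method.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def expert_diagnosis(fever_c, cough, breath_rate):
    # Hand-written stand-in for expert knowledge.
    if fever_c > 38.5 and cough and breath_rate > 24:
        return 1   # "refer urgently"
    return 0       # "routine care"

# Generate synthetic patients and let the expert rules label them.
X = np.column_stack([
    rng.normal(37.5, 1.2, 20_000),     # temperature (C)
    rng.integers(0, 2, 20_000),        # cough (0/1)
    rng.normal(18, 6, 20_000),         # breaths per minute
])
y = np.array([expert_diagnosis(*row) for row in X])

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300, random_state=0)
model.fit(X, y)
print("agreement with expert rules:", model.score(X, y))
```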
“How Artificial Intelligence will Reshape the Global Order: The Coming Competition Between Digital Authoritarianism and Liberal Democracy”, by Nicholas Wright
SURPRISE

A thought-provoking forecast of how developing social control technologies could affect domestic politics and strengthen authoritarian governments.
“Data Breaches Could Cause Users to Opt Out of Sharing Personal Data. Then What?” by Douglas Yeung from RAND
“If the public broadly opts out of using tech tools…insufficient or unreliable user data could destabilize the data aggregation business model that powers much of the tech industry. Developers of technologies such as artificial intelligence, as well as businesses built on big data, could no longer count on ever-expanding streams of data. Without this data, machine learning models would be less accurate.”
Arguably, with its General Data Protection Regulation (GDPR), the European Union has already moved in this direction. While the author focuses on commercial issues, there are also national security implications if China – where data privacy is not recognized as a legitimate concern – is able to develop superior AI applications because of access to a richer set of training data.
“Parents 2018: Going Beyond Good Grades”, a report by Learning Heroes and Edge Research
SURPRISE

Improving education, and more broadly the quality of a nation’s human capital, is critical to improving employment, productivity and economic growth and reducing income inequality. But no system, team, or individual can improve (except by random luck) in the absence of accurate feedback. And this new report makes painfully clear that this is too often missing in America’s K-12 education system.

The report begins with the observation that, “parents have high aspirations for their children. Eight in 10 parents think it’s important for their child to earn a college degree, with African-American and Hispanic parents more likely to think it’s absolutely essential or very important. Yet if students are not meeting grade-level expectations, parents’ aspirations and students’ goals for themselves are unlikely to be realized. Today, nearly 40% of college students take at least one remedial course; those who do are much more likely to drop out, dashing both their and their parents’ hopes for the future…

Over three years, one alarming finding has remained constant: Nearly 9 in 10 parents, regardless of race, income, geography, and education levels, believe their child is achieving at or above grade level. Yet national data indicates only about one-third of students actually perform at that level. In 8th grade mathematics, while 44% of white students scored at the proficient level on the National Assessment of Educational Progress in 2017, only 20% of Hispanic and 13% of African-American students did so. This year, we delved into the drivers of this “disconnect.” We wanted to understand why parents with children in grades 3-8 hold such a rosy picture of their children’s performance and what could be done to move them toward a more complete and accurate view…

Report Cards Sit at the Center of the Disconnect: Parents rely heavily on report card grades as their primary source of information and assume good grades mean their child is performing at grade level. Yet two-thirds of teachers say report cards also reflect effort, progress, and participation in class, not just mastery of grade-level content… More than 6 in 10 parents report that their child receives mostly A’s and B’s on their report card, with 84% of parents assuming this indicates their child is doing the work expected of them at their current grade… Yet a recent study by TNTP found that while nearly two-thirds of students across five school systems earned A’s and B’s, far fewer met grade-level expectations on state tests. On the whole, students who were earning B’s in math and English language arts had less than a 35% chance of having met the grade-level bar on state exams.”
Nov18: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
“Deep Learning can Replicate Adaptive Traders in a Limit-Order Book Financial Market”, by Calvez and Cliff
“Successful human traders, and advanced automated algorithmic trading systems, learn from experience and adapt over time as market conditions change…We report successful results from using deep learning neural networks (DLNNs) to learn, purely by observation, the behavior of profitable traders in an electronic market… We also demonstrate that DLNNs can learn to perform better (i.e., more profitably) than the trader that provided the training data. We believe that this is the first ever demonstration that DLNNs can successfully replicate a human-like, or super-human, adaptive trader.”

This is a significant development. Along with similar advances in reinforcement learning (e.g., by DeepMind with AlphaZero), one can easily envision a situation where – at least over short time frames – most humans completely lose their edge over algorithms.

The good news (for humans at least) is that over longer time frames, the structure of the system evolves (and becomes less discrete), and performance becomes more dependent on higher forms of reasoning – causal and counterfactual – where humans are still far ahead of algorithms (and whose sensemaking, situation awareness, and decision making The Index Investor is intended to support).
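To make the “learning purely by observation” idea concrete, the sketch below fits a small network to imitate a trader’s quoting decisions from logged (market state, quote) pairs. The features, the simulated “trader log”, and the model are our own placeholders; the paper’s DLNN architecture and limit-order-book features are considerably richer.

```python
# Minimal imitation-learning sketch of the idea described above: fit a network,
# purely from observed (market state, quote) pairs, to reproduce a profitable
# trader's behavior. The features and "trader log" are placeholders, not the
# limit-order-book representation used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Placeholder log of an adaptive trader: state = (best bid, best ask, inventory),
# action = the price at which the trader quoted.
n = 50_000
best_bid = 100 + rng.normal(0, 1, n).cumsum() * 0.01
best_ask = best_bid + rng.uniform(0.01, 0.10, n)
inventory = rng.integers(-5, 6, n)
states = np.column_stack([best_bid, best_ask, inventory])
quotes = (best_bid + best_ask) / 2 - 0.01 * inventory   # stand-in "expert" policy

# Learn the mapping from observed market state to the trader's quote.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
model.fit(states[:40_000], quotes[:40_000])
print("held-out R^2:", model.score(states[40_000:], quotes[40_000:]))
```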
“Social media cluster dynamics create resilient global hate highways”, by Johnson et al
“Online social media allows individuals to cluster around common interests -- including hate. We show that tight-knit social clusters interlink to form resilient ‘global hate highways’ that bridge independent social network platforms, countries, languages and ideologies, and can quickly self-repair and rewire. We provide a mathematical theory that reveals a hidden resilience in the global axis of hate; explains a likely ineffectiveness of current control methods; and offers improvements…”
“The Semiconductor Industry and the Power of Globalization”, The Economist, 1Dec18
“If data are the new oil...chips are what turn them into something useful.” This special report provides a good overview of how the critical and highly globalized semiconductor supply chain is coming under increased pressure as competition between China and the United States intensifies.
“There’s a Reason Why Teachers Don’t Use the Software Provided by Their Districts”, by Thomas Arnett
SURPRISE

We have noted in the past that education (like healthcare) is a critical social technology where substantial performance improvement is critical to increasing future rates of national productivity growth and reducing inequality. This study is not encouraging with respect to the impact technology has been having on the education sector.

The authors find that, “a median of 70% of districts’ software licenses never get used, and a median of 97.6% of licenses are never used intensively.”
Reports emerged from China that CRISPR gene-editing technology had been used to modify human embryos’ DNA before implantation in a woman’s womb via IVF. The initial focus was reportedly on producing children who are resistant to HIV, smallpox, and cholera.
SURPRISE

While this has been recognized as a possibility, there was also a belief that it would not happen so quickly, or with so little control. It was also significant that one target of the DNA modification was resistance to smallpox, a disease which is believed to have been eradicated and whose causative agent is now retained only by governments (which makes it a potentially very powerful biowarfare weapon).
“Virtual Social Science” by Stefan Thurner
SURPRISE

Thurner is one of the world’s leading complex adaptive systems researchers, and anything he writes is usually rich with unique insights.

His latest paper is no exception. He reviews findings from the analysis of 14 years of extremely rich data from Pardus, a massive multiplayer online game (MMOG) involving about 430,000 players in which economic, social, and other decisions are made by humans, not algorithms.

This data can be used to develop and test a wide range of social science theories about the behavior of complex adaptive systems at various levels of aggregation, from the individual to the group to the system. It can also be used to evaluate agent-based and AI-driven approaches to predicting the future behavior of complex systems.

The author shows how many of the findings from analyzing game data line up with experimental findings based on the behavior of far fewer subjects. This points the way towards a new and potentially much more powerful approach to social science.

However, Thurner also notes the current limits on the extent to which human societies can be understood, and their behavior predicted, using this methodology: the inherent “co-evolutionary complexity” of complex adaptive social systems, whose interactions cause structures to change over time, often in a non-linear manner.
Oct18: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?

“Using Machine Learning to Replicate Chaotic Attractors”, by Pathak et al

SURPRISE
Advances in a machine learning area known as “reservoir computing” have led to the creation of a model that reproduced the dynamics of a complex dynamical system. If this initial work can be extended, it represents a significant advance. That said, this is not the same thing as an AI learning to reproduce and predict the dynamic behavior of a complex adaptive system, such as financial markets and economies.
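For readers unfamiliar with reservoir computing, the sketch below shows the basic recipe on the Lorenz system: drive a fixed random recurrent “reservoir” with the observed trajectory, fit only a linear readout by ridge regression, and then run the trained model closed-loop so it generates the attractor’s dynamics on its own. The sizes and hyperparameters are illustrative, not those used by Pathak et al.

```python
# Minimal reservoir-computing (echo state network) sketch of the idea above:
# learn the dynamics of the Lorenz system from data, then run the trained model
# closed-loop so it reproduces the attractor. Hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)

# --- generate Lorenz trajectory data ---
def lorenz_step(v, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = v
    return v + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

traj = np.empty((20_000, 3))
traj[0] = [1.0, 1.0, 1.0]
for t in range(1, len(traj)):
    traj[t] = lorenz_step(traj[t - 1])

# --- fixed random reservoir ---
N = 500
W_in = rng.uniform(-0.5, 0.5, (N, 3))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def update(state, u):
    return np.tanh(W @ state + W_in @ u)

# --- drive the reservoir with data and fit a linear readout (ridge regression) ---
states = np.zeros((len(traj) - 1, N))
s = np.zeros(N)
for t in range(len(traj) - 1):
    s = update(s, traj[t])
    states[t] = s
targets = traj[1:]
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ targets).T

# --- closed-loop prediction: feed the model's own output back in ---
u = traj[-1]
for _ in range(1_000):
    s = update(s, u)
    u = W_out @ s     # predicted next point on the attractor
```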
“The Impact of Bots on Opinions in Social Networks” by Hjouji et al
Using both a model and data from the 2016 US presidential election, the authors conclude that “a small number of highly active bots in a social network can have a disproportionate impact on opinion…due to the fact that bots post one hundred times more frequently than humans.” In theory, this should make it easier for platforms like Twitter and Facebook to identify and close down these bots. The authors also surprisingly found that in 2016 pro-Clinton bots produced opinion shifts that were almost twice as large as the pro-Trump bots, despite the latter being larger in number.
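A toy simulation clarifies the mechanism: if each person’s opinion drifts toward the average of the messages they see, and bots post roughly 100 times more often than humans, a tiny bot population dominates the message stream and pulls the whole network toward its view. The model below is our own simplification, not the one in the paper.

```python
# Toy simulation of the mechanism described above: bots are few but post ~100x
# more often than humans, so they dominate the message stream each person sees
# and shift average opinion. This is our own simplification, not the authors' model.
import numpy as np

rng = np.random.default_rng(0)

n_humans, n_bots = 10_000, 50
post_rate_human, post_rate_bot = 1, 100         # posts per period
bot_opinion = 1.0                               # bots all push the same view

opinions = rng.uniform(-1, 1, n_humans)         # humans start spread out
print("initial mean human opinion:", opinions.mean().round(3))

for period in range(50):
    # Messages visible this period, weighted by posting rates.
    human_msgs = opinions.sum() * post_rate_human
    bot_msgs = bot_opinion * n_bots * post_rate_bot
    n_msgs = n_humans * post_rate_human + n_bots * post_rate_bot
    stream_mean = (human_msgs + bot_msgs) / n_msgs

    # Each human nudges its opinion toward the average message it sees.
    opinions += 0.1 * (stream_mean - opinions)

print("mean opinion after 50 periods:", opinions.mean().round(3))
print("share of messages from bots: {:.0%}".format(n_bots * post_rate_bot / n_msgs))
```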
“Learning-Adjusted Years of Schooling” by Filmer et al from the World Bank
This valuable new metric combines the time spent in school with how much is learned during that time. The authors find that LAYS is strongly correlated with GDP growth. They also find wide gaps between countries, with some education systems being much more productive (in terms of learning per unit of time) than others. The good news is that this points to a substantial source of future gains in total factor productivity for these economies, provided their education systems can be improved.
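As we read it, the basic construction multiplies years of schooling by a learning ratio relative to a high-performing benchmark, so two countries with identical enrollment can have very different learning-adjusted years. The benchmark value and country figures below are made-up examples used only to show the calculation, not the World Bank’s data.

```python
# Illustrative LAYS-style calculation: adjust expected years of schooling by the
# ratio of a country's test scores to a high-performing benchmark. The benchmark
# value and country figures are made-up examples, not the World Bank's data.
BENCHMARK_SCORE = 625   # assumed high-performance benchmark score

countries = {
    # name: (expected years of schooling, harmonized test score)
    "Country A": (12.0, 600),
    "Country B": (12.0, 400),
    "Country C": (9.0, 550),
}

for name, (years, score) in countries.items():
    lays = years * score / BENCHMARK_SCORE
    print(f"{name}: {years} years x ({score}/{BENCHMARK_SCORE}) = {lays:.1f} learning-adjusted years")
```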
“The Condition of College and Career Readiness, 2018” by ACT Inc.
More disappointing results based on a well-known indicator of US K-12 education system performance.

About three-fourths (76%) of 2018 ACT-tested graduates said they aspire to postsecondary education, and most of those students said they aspire to a four-year degree or higher. Only 27% met all four of ACT’s college and career readiness benchmarks; 35% met none. Readiness levels in math have steadily declined since 2014. Sample size: 1.9 million graduates. “Just 26% of ACT-tested 2018 graduates likely have the foundational work readiness skills needed for more than nine out of 10 jobs recently profiled in the ACT JobPro® database.” This has significant (and negative) implications for future productivity and wage growth.
Sep18: New Technology Information: Indicators and Surprises
Why Is This Information Valuable?
In a new book, “AI Superpowers”, Chinese venture capitalist Kai-Fu Lee makes an important point: there is a critical difference between AI innovation and AI implementation. Success in the latter depends on the ability to collect and analyze large amounts of data – and this is an area where China is outpacing the rest of the world, because of its size, its state capitalism model, its low level of concern with privacy, and its data-intensive approach to domestic security.
Provides a logical argument for how and why China could gain a significant advantage in key artificial intelligence technologies.
The US House of Representatives Subcommittee on Information Technology published a new report titled “Rise of the Machines.” Highlights: “First, AI is an immature technology; its abilities in many areas are still relatively new. Second, the workforce is affected by AI; whether that effect is positive, negative, or neutral remains to be seen. Third, AI requires massive amounts of data, which may invade privacy or perpetuate bias, even when using data for good purposes. Finally, AI has the potential to disrupt every sector of society in both anticipated and unanticipated ways.”
The report’s conclusions are an interesting contrast to Kai-Fu Lee’s. Similar to critiques by Gary Marcus and Judea Pearl, it highlights the limitations of current AI technologies, which suggests we are further away from a critical threshold than many media reports would suggest. That said, it also agrees that once a critical threshold of AI capability is reached, it will have strong disruptive effects.

However, the report agrees with Lee that privacy concerns are a potentially important constraint on AI progress.
“China is Overtaking the US in Scientific Research” by Peter Orszag in Bloomberg Opinion. Not just the quantity, but also “the quality of Chinese research is improving, though it currently remains below that of U.S. academics. A recent analysis suggests that, measured not just by numbers of papers but also by citations from other academics, Chinese scholars could become the global leaders in the near future.”
Suggests that the pace of technological improvement in China will accelerate.
“Quantum Hegemony: China’s Ambitions and the Challenge to US Innovation Leadership”, Center for a New American Security. “China’s advances in quantum science could impact the future military and strategic balance, perhaps even leapfrogging traditional U.S. military-technological advantages. Although it is difficult to predict the trajectories and timeframes for their realization, these dual-use quantum technologies could “offset” key pillars of U.S. military power, potentially undermining critical technological advantages associated with today’s information-centric ways of war, epitomized by the U.S. model.”
Highlights a key area in which faster Chinese technological progress and breakthroughs could confer substantial military advantage.
“A Storm in an IoT Cup: The Emergence of Cyber-Physical Social Machines” by Madaan et al. “The concept of ‘social machines’ is increasingly being used to characterize various socio-cognitive spaces on the Web. Social machines are human collectives using networked digital technology, which initiate real-world processes and activities including human communication, interactions and knowledge creation. As such, they continuously emerge and fade on the Web. The relationship between humans and machines is made more complex by the adoption of Internet of Things (IoT) sensors and devices. The scale, automation, continuous sensing, and actuation capabilities of these devices add an extra dimension to the relationship between humans and machines making it difficult to understand their evolution at either the systemic or the conceptual level. This article describes these new socio-technical systems, which we term Cyber-Physical Social Machines.”
Increasing complexity creates exponentially more hidden critical thresholds, and ways for a system to generate non-linear effects.
“Notes From the Frontier: Modeling the Impact of AI on the World Economy”, McKinsey Global Institute. Adoption of AI could increase annual global GDP growth by 1.2%. Adoption of AI technologies and emergence of their impact is following a typical “S-Curve” pattern. At this point, “the absence of evidence is not evidence of absence” of its potential impact.
Excellent analysis of the current state of AI development, rate of adoption, and range of observed effects.
Critical Point: “Because economic gains combine and compound over time…a key challenge is that adoption of AI could widen gaps between countries, companies, and workers.”
“Blueprint: How DNA Makes Us Who We Are” by Robert Plomin. Argues that genetic differences cause most variation in human psychological traits. Accumulating evidence for the dominance of nature over nurture has many potentially disruptive implications.

See also, "Top 10 Replicated Findings from Behavioral Genetics" by Plomin et al
SURPRISE. The body of research this book compiles and synthesizes has enormous disruptive potential, at the economic, social, and ultimately political level.