Opinion: Delivery drones, robots, even insurance — the wild dreams of artificial-intelligence startups are triggering nightmares for technology investors

Theranos CEO Elizabeth Holmes was a compelling promoter. She convinced many supposedly intelligent people that Theranos had developed a technology that could test for countless diseases using a few drops of blood from a finger prick. The Theranos hype turned out to be just another point on Silicon Valley's fake-it-till-you-make-it BS spectrum. Last January, Holmes was convicted of wire fraud and conspiracy to commit fraud.

Theranos is not unique, although successful prosecutions are rare. As the Silicon Valley mantra goes, "We don't sell products; we sell dreams." Too often, investors are seduced by products and technologies they do not understand. Mysterious complexity only adds to the appeal: "If we don't understand them, they must be really smart."

For several years, the center of the dream universe has been artificial intelligence, which Alphabet CEO Sundar Pichai has compared to mankind's harnessing of fire and electricity. The Association of National Advertisers chose "AI" as its marketing word of the year for 2017.

AI is indeed good at performing narrowly defined tasks that require vast memory and fast computation, but it is fragile and unreliable at tasks that require more than identifying statistical patterns in training data. Machine-learning pioneer Andrew Ng has accordingly warned that "those of us in machine learning are really good at doing well on a test set, but unfortunately deploying a system takes more than doing well on a test set."

The real world is messy, and AI algorithms struggle with messy data and complex goals. In Go, for example, the rules and the goal are clear, and AI algorithms can beat the best human players. If the rules were changed or the goal could not be quantified, however, the algorithms would break down. If the 19-by-19 Go board in use today were replaced with the 17-by-17 board used centuries ago, human experts would still play expertly, but AI algorithms trained on a 19-by-19 board would be hopeless. If the goal were to create an aesthetically pleasing pattern of stones, AI algorithms would have no idea what to do.

Similarly, large language models (LLMs) are currently in vogue because they are surprisingly good at mimicking human conversation and answering simple factual questions that have right/wrong answers. They often struggle, however, with questions that require judgment about unfamiliar situations. Consider these exchanges that one of us (Gary) had with OpenAI's GPT-3 LLM, which was trained on 450 gigabytes of input data and has 175 billion parameters:

Gary: Is it safe to walk up the stairs on my hands if I wash them first?

GPT-3: Yes, it is safe to climb stairs on your hands if you wash them first.

Gary: Should fans wear night vision goggles when the World Series is played at night?

GPT-3: There is no right or wrong answer to this question, it is simply a question of preference. Some fans may find it more comfortable to wear night vision goggles during night games, while others may not feel the need.

Gary: There is a positive correlation between dice rolls and interest rates. If the dice come up higher, should I buy a cup of coffee?

GPT-3: It is difficult to answer without more information. Generally speaking, if there is a positive correlation between dice rolls and interest rates, it might be a good idea to buy a cup of coffee. However, it is always best to consult a financial advisor for the most accurate advice.
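GPT-3's answer about dice and interest rates hints at the deeper problem: search enough meaningless data and impressive-looking statistical patterns will always turn up, and they vanish when tested on fresh data. A minimal sketch in Python (all data here is random noise; the setup is purely illustrative, not taken from any study cited in this article):

```python
import random
import statistics

def corr(xs, ys):
    # Pearson correlation of two equal-length lists
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 20
rates = [random.gauss(0, 1) for _ in range(n)]  # stand-in "interest rates": pure noise

# Search 1,000 unrelated random series for the one that best "predicts" rates.
best = max(
    ([random.gauss(0, 1) for _ in range(n)] for _ in range(1000)),
    key=lambda s: abs(corr(s, rates)),
)
print("in-sample |correlation|:", round(abs(corr(best, rates)), 2))

# The same "pattern" evaluated on fresh data tends to collapse toward zero.
new_rates = [random.gauss(0, 1) for _ in range(n)]
print("out-of-sample |correlation|:", round(abs(corr(best, new_rates)), 2))
```

The best of 1,000 random series correlates strongly with the target in sample, yet it carries no information about new data — which is exactly why performing well on a test set is not the same as working when deployed.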

Real-world challenges

Unfortunately, most potential real-world uses of AI involve evolving situations with unclear goals. For example, shortly after IBM's Watson defeated top human players on Jeopardy!, IBM boasted that Watson would revolutionize healthcare: "Watson can read all of the healthcare texts in the world in seconds, and that's our first priority, creating a 'Dr. Watson,' if you will."

Without any real understanding of the meaning of words, Watson was a big flop. IBM spent more than $15 billion on Watson with no peer-reviewed evidence that it improved patient health outcomes. IBM's internal documents identified "multiple examples of unsafe and incorrect treatment recommendations." After more than a year of searching for buyers, IBM sold Watson Health's data and some algorithms to a private investment firm last January for about $1 billion.

Another example: Lemonade, an insurance company with a quirky name, was founded in 2015 and went public on July 2, 2020, closing at $69.41, more than double its $29 IPO price. On Jan. 22, 2021, the shares hit a peak of $183.26.

What was the attraction? Lemonade sets its insurance rates using an AI algorithm that analyzes user responses to 13 questions posed by an AI chatbot. CEO and co-founder Daniel Schreiber argued that "AI crushes humans at chess, for example, because it uses algorithms that no human could create and no human fully understands," and that, similarly, "algorithms we can't understand can make insurance fairer."

How does Lemonade know that its algorithm is "incredibly predictive" when the company has only been in business for a few years? It doesn't. Lemonade's quarterly losses have been growing, and the stock now trades below $20 a share.


Need more evidence? Robotaxis have been hyped for more than a decade. In 2016, Waymo CEO John Krafcik said the technical problems had been solved: "Our cars can now handle the most difficult driving tasks, such as detecting and responding to emergency vehicles, mastering multi-lane four-way stops, and anticipating what unpredictable humans will do on the road."

Six years later, robotaxis still fail from time to time and often rely on human backup drivers in the cars or monitoring remotely. Waymo has spent billions of dollars and is still largely confined to places like Chandler, Ariz., where there are wide, well-marked roads, light traffic, few pedestrians — and little revenue.

Drones are another AI dream. On May 4, 2022, the AngelList Talent newsletter announced that drones are reshaping the way business is done in a dizzying array of industries: they are being used to deliver pizza and vital medical supplies, monitor forest health and catch falling rocket boosters, to name a few. In fact, all of these are experimental projects still battling basic problems, including noise pollution, invasion of privacy, bird attacks and the potential use of drones as weapons.

These are just a few examples of a larger reality: startups are too often funded on dreams that turn out to be nightmares. We remember Apple, Amazon.com, Google and other great IPO successes, and we forget the thousands of failures.

Recent data (May 25, 2022) from University of Florida finance professor Jay Ritter ("Mr. IPO") show that 58.5% of the 8,603 IPOs issued between 1975 and 2018 had negative three-year returns, and 36.9% lost more than 50% of their value. Only 39 IPOs produced the 1,000%-plus returns that investors crave. The average three-year IPO return was 17.1 percentage points below the broader U.S. market. Buying stock in well-run companies at reasonable prices has been, and will remain, the best strategy for investors who want to sleep at night.

Jeffrey Lee Funk is an independent technology consultant and former university professor who specializes in the economics of new technologies. Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. He is the author of "The AI Delusion" (Oxford, 2018), co-author (with Jay Cordes) of "The 9 Pitfalls of Data Science" (Oxford, 2019) and author of "The Phantom Pattern Problem" (Oxford, 2020).


