
4 of the worst ways to use AI

As the pandemic further accelerates our digital transformation, companies are relying even more on automation, and particularly on artificial intelligence. Two-thirds of CEOs surveyed last year by a major consulting firm said they will use AI even more than before to create new workforce models, and even more plan to digitize operations, customer interactions, business models, and revenue streams. This huge acceleration will surely bring massive failures, leaving companies, and in some cases even critical infrastructure, vulnerable to loss as critical decision-making is handed off to AI.

As a technologist who has built platforms and worked in major industries that lean heavily on AI, such as fintech and health care, I have seen first-hand what goes wrong when some of the world’s biggest companies leave their intelligence to their AI. To judge by the hype, everything can be improved by sophisticated algorithms sifting through masses of data. From streamlining customer care to inventing new perfumes, and even coaching soccer teams, AI looks like an unstoppable purveyor of competitive advantage; practically all that executives have to do is let it loose and go have lunch (cooked by an AI robot chef) while they watch their company’s profits climb.

Sadly, what we see in execution is a different world altogether, from mismanaged expectations to extremely expensive failures and mistakes. All too often AI is not the shining light to a bright company future but the blind leading the blind down the wrong path, until someone falls off a cliff. Many of those most responsible for the hype around artificial intelligence have never written a line of code, let alone deployed AI in production, and few developers have any incentive to give you a dose of reality. So let me share some practical failures of AI, drawn from my own experience, and point out what decision makers need to be aware of.

Here are some of the worst ways to use AI, as demonstrated by hedge funds, Wall Street investment banks, and companies ranging from Fortune 100 enterprises down to startups:

1. Making decisions based on the wrong data

AI is great at finding patterns in huge datasets, efficient at predicting outcomes based on those patterns, and good at finding uncorrelated alpha (hidden patterns in the dataset). But big problems arise when the wrong data, or outlier information, gets pulled into the dataset. In one famous example from the late 2000s, a military coup in Thailand was read by the algorithm behind a major fund as a market event; the fund shorted a raft of Asian equities and quickly lost nine figures in dollar value.
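
Guardrails here can be simple. As a minimal sketch, assuming a pandas Series of daily returns and an arbitrary z-score threshold (both illustrative, not a production rule), an engine can flag datapoints that sit far outside the recent distribution and route them to a human before trading on them:

```python
import pandas as pd

def flag_outlier_returns(returns: pd.Series,
                         window: int = 60,
                         z_threshold: float = 4.0) -> pd.Series:
    """Flag returns far outside the recent distribution.

    Flagged rows should be quarantined for human review rather than
    traded on automatically; window and threshold are illustrative.
    """
    rolling_mean = returns.rolling(window).mean()
    rolling_std = returns.rolling(window).std()
    z_scores = (returns - rolling_mean) / rolling_std
    return z_scores.abs() > z_threshold

# Hypothetical usage:
# returns = prices["close"].pct_change()
# suspect = flag_outlier_returns(returns)
# model_input = returns[~suspect]
```

A check like this would not have told the fund what the coup meant, but it would have stopped the engine from acting on the anomaly alone.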

I worked with a small hedge fund focused on the TMT sector (technology, media, telecommunications). The founders came from a large financial firm and brought with them some programmers who had worked on trading systems for REITs and the energy sector. Their goal was to build an analytics engine that the traders in this new TMT-focused hedge fund could use to signal certain key events. The problem was that their developers had copied and merged two engines built for REIT and energy trading and tried to fit technical indicators more relevant to the TMT sector into this hybrid. I was called in to do forensics because the engine was giving erratic and inconsistent results, and it took us some time to realize that, although the founders thought their team had built a brand-new engine designed specifically for TMT, it was really built from spare parts. All three sectors share similar technical indicators, like implied volatility or the 52-week moving average, but AI is sensitive, detailed work, and each engine should be customized to its sector.
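
Part of the temptation to copy is that the shared indicators really are just a few lines apiece. A minimal pandas sketch of one of them, assuming a weekly closing-price series:

```python
import pandas as pd

def moving_average_52w(weekly_close: pd.Series) -> pd.Series:
    """52-week simple moving average of a weekly closing-price series."""
    return weekly_close.rolling(window=52, min_periods=52).mean()
```

The indicator is trivially shareable; the engine around it (feature choices, thresholds, training data) is the sector-specific work that cannot be copied from REIT or energy code.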

Make sure your AI is making recommendations based on relevant data.

2. Failing to train your AI properly

You can feed your AI engine all the right data and have it spit back the right answers, but until it gets tested in the wild you don’t know what it will do. Rushing to give it more responsibility than it is ready for is like sending a small child out into the real world — neither one is going to bring good results.

A hedge fund client was interested in AI for quant trading. They had hired an external team to build a proof-of-concept AI engine, and my team later provided consulting services on that POC. To build a good AI engine that generates alpha (returns in excess of the market), you need a large, homogeneous dataset of historical data. In finance, datasets come in a time-series format, with data points at widely different levels of granularity. Generally, the more data points in a dataset, the better the engine will be at prediction; those data points help a well-designed engine discover non-linear correlations that inform its forecasts. We advised the client to give the engine more time to process their dataset, to make it more homogeneous, and not to follow mainstream pricing signals. Sadly, they didn’t, and the POC overfit: it was so finely tuned to the specific dataset that it failed on new, unseen data. In other words, it was useless with live market data.
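
This failure mode is easy to catch before launch. A minimal scikit-learn sketch, using random arrays as stand-ins for the engine’s real features and labels, shows the basic check: score the model on time-ordered folds it has never seen.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import TimeSeriesSplit

# Synthetic stand-ins for a real feature matrix and next-period
# returns, both ordered in time.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.normal(size=1000)

for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    in_sample = r2_score(y[train_idx], model.predict(X[train_idx]))
    out_of_sample = r2_score(y[test_idx], model.predict(X[test_idx]))
    # A healthy model scores similarly on both; a large gap is the
    # overfitting signature the POC above exhibited.
    print(f"train R^2 {in_sample:+.3f}   test R^2 {out_of_sample:+.3f}")
```

If the in-sample score is strong and the out-of-sample score collapses, the engine has memorized history rather than learned from it.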

Give your AI time to process and learn new information.

3. Ignoring the human responsibility for decisions

No matter what you program your AI to do, it will not share your human goals or bear their consequences. Thus we have seen AI lead early GPS users into a river, or delete critical information to “minimize” differences in a dataset.

I’ve seen more than one startup built on the assumption that AI algorithms can learn credit approval models and replace the credit officer in granting or denying loans. But when you are denied credit, federal law requires the lender to tell you why. Software doesn’t really make decisions (it just identifies patterns), and it isn’t responsible for them; humans are. Because federal law places that responsibility on humans, many of these startups burned through venture capital and then could not legally launch: the AI they had developed was inherently biased, and when it denied a loan, no human could adequately explain why.
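
One mitigation is to keep the model interpretable enough that every decision decomposes into reasons a human officer can own and state. A minimal sketch of the idea, with illustrative feature names and synthetic stand-in data (nothing here resembles a real lender’s model), using a plain logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["debt_to_income", "credit_utilization", "late_payments"]  # illustrative

# Synthetic stand-in data; a real lender would train on its own loan book.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, len(FEATURES)))
y = ((X @ np.array([-1.5, -1.0, -2.0]) + rng.normal(size=500)) > 0).astype(int)  # 1 = approve

model = LogisticRegression().fit(X, y)

def denial_reasons(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank the features that pushed this applicant toward denial,
    giving a human officer concrete reasons to state."""
    contributions = model.coef_[0] * applicant  # negative = pushes toward denial
    return [FEATURES[i] for i in np.argsort(contributions)[:top_n]]
```

A black-box model that cannot produce this decomposition leaves the lender unable to meet its legal obligation, no matter how accurate it is.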

Make sure humans stay responsible for decisions that have human consequences.

4. Overvaluing data

Some data simply can’t be used to build anything useful. One client that failed at using AI was a popular medical diagnosis platform with its own data lake and a broad array of datasets. The company that owned it had acquired another platform with its own array of siloed datasets. The executives wanted to glean insight from the jumble of disconnected datasets and needed help onboarding potential customers. The problem was that these datasets described different medical issues and patient profiles, and finding common denominators of any real value was not possible. Despite all the compiled information, working with this client’s data was like having Lego pieces that don’t actually connect: just because they are alike in many respects does not mean you can build a castle out of them. After consulting with the client, we recommended they not do the project.
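
A first-pass feasibility check here is cheap. As a minimal sketch, assuming the two platforms share an identifier column (in this engagement they effectively did not, which was the whole problem):

```python
import pandas as pd

def join_feasibility(a: pd.DataFrame, b: pd.DataFrame, key: str) -> float:
    """Fraction of `a`'s keys that also appear in `b`.

    A value near zero means the datasets describe different
    populations: Lego pieces that look alike but do not connect.
    """
    return a[key].isin(b[key]).mean()

# Hypothetical usage with an assumed shared column name:
# overlap = join_feasibility(platform_a_records, platform_b_records, "record_id")
```

If the overlap is near zero, no amount of modeling downstream will manufacture the connection.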

A real estate developer wanted us to design AI for build-decision insights, the notion being that if we ingested enough data from real estate listings, we could perform analytics to inform decisions about the layout of their new developments. The client hoped the AI would tell them the optimal layout for a given location. For example, should the building have one large apartment per floor, or divide each floor into three one-bedroom apartments? Should it be a mixed-use building with commercial stores at the bottom and residential rentals above? After we ingested listings data from different states, analyzed the datasets, and interacted with subject matter experts, it became obvious that, because the real estate market is highly localized, a one-size-fits-all algorithm would not work. The real estate of Midtown Manhattan East differs from that of Midtown West, Brooklyn differs from Queens, let alone New York state treated as a single dataset. And taken independently, the many local datasets were too small to build a meaningful classification algorithm on. The project didn’t go through.
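
A sanity check like the following, sketched with a hypothetical neighborhood column and an arbitrary row threshold, surfaces that problem before any modeling starts:

```python
import pandas as pd

def viable_segments(listings: pd.DataFrame, min_rows: int = 5000) -> pd.Series:
    """Listing counts per neighborhood, keeping only segments with
    enough rows to train a local model; the threshold is illustrative."""
    counts = listings.groupby("neighborhood").size()
    return counts[counts >= min_rows]

# If this comes back nearly empty, a per-locale classifier is dead
# on arrival, however large the combined dataset looks.
```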

In these cases, the companies cut their losses, but many others just keep throwing money at AI, digging deeper into details that never connect to their goals.

Before building an AI engine, make sure you have the right parts.

Where do we go from here?

AI may be the future of this accelerated digital business world, but for now too much of it is still hype. Enterprises should consider AI, but with a grounded view and a proper understanding of what it really does, and they should bring that perspective to any related initiative. Decision makers need to understand the reality of AI: the potential it brings and the challenges behind it. Have a clear strategy for any potential AI project, taking into account the data available, timelines, costs, and expectations. Successful AI projects usually deliver long-term results rather than immediate benefits.

AI, at its core, is simply inference; it can help analyze tremendous detail, but only humans can understand the big picture. We understand that, as humans, our decisions have consequences (business, legal, ethical), and giving AI false human traits covers up the fact that it does not think the way we do. Humans may make more “mistakes” than AI, but we also have more power to recognize them. So consider “artificial intelligence,” but when you’re thinking about how to deploy it, make sure your own intelligence is still leading the way.

Ahmad Alokush is the founder of technology boutique Ahmadeus, which advises C-level executives, managing directors, and fund managers on how new technology can impact their market position and overall profitability. He also has served as an expert witness in litigation concerning technology.



Author: Ahmad Alokush, Ahmadeus
Source: VentureBeat

