Presented by Ople.ai
In recent years, AI has emerged as an undisputed competitive advantage, which means business leaders navigating digital transformation face some familiar questions: How do you measure success? What questions do you need to ask to ensure you’re making the right decisions out of the gate? And most importantly, how do you implement AI in your business to deliver a positive impact?
The answers to those questions all fall under one umbrella: Do you choose a centralized approach or a decentralized approach to embedding AI into your business? There are pros and cons to both, says Pedro Alves, founder and CEO at Ople.ai.
Centralized vs. decentralized
The first step of any data science project is to define a problem within the business — often referred to as strategic problem formulation — and develop a data set for that problem. Once that is complete, you enter what Alves calls the data science technical chasm. The chasm is the point at which the data scientist finishes all the technical steps, such as feature engineering and algorithm selection, and steps back. Once the AI model is built, the end business user begins to use the solution.
It’s within this process that the roadblocks happen, says Alves, and the biggest problem with both centralized and decentralized AI is communication. A centralized team communicates well within the technical chasm, but not so much with the business team that will actually use the product. With a decentralized team, it’s the other way around: there’s little communication within the technical chasm, but good communication at the beginning and end of the process.
The challenges for each
In a centralized approach, the team is isolated from the other groups it serves within the organization. It is not well embedded in the space these internal customers occupy, nor attuned to the nuances of the problems each business unit is facing. Because of this distance from the various teams, the lack of real communication can lead directly to problems in building useful data science or useful AI.
With the decentralized approach, every team is equipped with its own data scientists, but communication is again the issue; this time, it’s the communication between the groups of data scientists across the business as a whole. You gain the advantage of embedding AI capabilities directly within each team, increasing communication in each individual space, but you lose a cohesive approach to data science for your entire organization.
In other words, data scientists operate at their highest level when they work together. That doesn’t happen when they are isolated from each other: they can’t capture the synergy between projects that might have developed had they been connected closely with other scientists, teams, and departments.
The hiring dilemma
One of the eternal problems of the AI boom is hiring enough data scientists, of high enough quality, to fit either model. Addressing the centralized aspect, Alves says that the volume of products that a core AI department can put out is low. “I’ve never seen a centralized data science or AI team that can serve all the entities within the organization that need it,” Alves explains. “There are only so many people they can hire and so many projects they can do.”
With a decentralized team, departments can run into problems in their ability to hire the right people to meet their needs.
“It’s easier to get that super talent when you have a really cool AI project or goal,” Alves says. “When you’re not a top-notch tech company, but you need a super high advanced AI person, it’s less enticing, so then it’s harder to hire. The talent you can hire is lower, so the quality of the work is lower, and how quickly you can hire people to do projects is lower on the decentralized approach.”
The hybrid approach — not the best of both worlds
The hybrid approach might sound like the best solution, but it’s essentially either a band-aid or only feasible for some companies, and certainly not ideal.
With a hybrid approach, companies try to solve the problem by building a centralized team to drive technology and innovation for the company while also placing decentralized teams in the various departments, in order to improve communication between each departmental team and its data scientists. The centralized team creates a platform for the embedded departmental AI teams to use, empowering them from a technical perspective.
The real solution
Whether you choose a centralized or a decentralized approach, the way to create repeatable processes and scientist-to-business communication is with a platform.
If you have a platform that automates the technical component of a data science project, which is where data scientists spend 99% of their time, they suddenly have time for conversations with different teams. Instead of two to three months, projects can be completed in hours to days, says Alves, letting centralized departments handle more volume.
When you look at decentralized AI, the same platform solves the problems from a different perspective. If the platform automates all the technical parts, the company will have a much easier time hiring data scientists because the requirements for knowledge and experience are lower. And when the same platform is used across the company, best practices and processes can be standardized through it.
“No matter which way you go, the automation of the process will solve the problems one way or another,” Alves says. “If you’re truly automating things in a way that’s useful, easy to use, and of high enough quality that a great data scientist would appreciate, you’re solving the problem.”
Author: VB Staff.
Source: VentureBeat