How AI and software can improve semiconductor chips | Accenture interview

Accenture has more than 743,000 people serving up consulting expertise on technology to clients in more than 120 countries. I met with one of them at CES 2024, the big tech trade show in Las Vegas, and had a conversation about semiconductor chips, the foundation of our tech economy.

Syed Alam, Accenture’s semiconductor lead, was one of many people at the show talking about the impact of AI on a major tech industry. He said that one of these days we’ll be talking about chips with trillions of transistors on them. No single engineer will be able to design them all, and so AI is going to have to help with that task.

According to Accenture research, generative AI has the potential to impact 44% of all working hours across industries, enable productivity enhancements across 900 different types of jobs, and create $6 to $8 trillion in global economic value.

It’s no secret that Moore’s Law has been slowing down. Back in 1965, Gordon Moore, who went on to co-found Intel three years later, predicted that chip manufacturing advances were proceeding so fast that the industry would be able to double the number of components on a chip every couple of years.

For decades, that law held true, as a metronome for the chip industry that brought enormous economic benefits to society as everything in the world became electronic. But the slowdown means that progress is no longer guaranteed.
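As a back-of-envelope illustration of what that doubling cadence implies, here is a minimal sketch, assuming a clean two-year doubling and the commonly cited figure of about 2,300 transistors for the 1971 Intel 4004 as a starting point:

```python
# Toy illustration of Moore's Law: component count doubling every two years.
# Starting point: the Intel 4004 (1971), commonly cited at ~2,300 transistors.
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(start_count: int, start_year: int, year: int) -> float:
    """Project a transistor count, assuming an idealized doubling every two years."""
    return start_count * 2 ** ((year - start_year) / DOUBLING_PERIOD_YEARS)

for year in (1971, 1991, 2011, 2023, 2030):
    print(year, f"{projected_transistors(2_300, 1971, year):,.0f}")
```

On this idealized curve the count passes a trillion transistors around 2030, which is roughly where the TSMC road map discussed later in the interview lands; real chips have tracked the curve only loosely, which is the slowdown at issue here.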

This is why the companies leading the race for progress in chips — like Nvidia — are valued at over $1 trillion. And the interesting thing is that as chips get faster and smarter, they’re going to be used to make AI smarter and cheaper and more accessible.

A supercomputer used to train ChatGPT has over 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server. ChatGPT’s hundreds of millions of queries consume about one gigawatt-hour each day, roughly the daily energy consumption of 33,000 US households. An autonomous car requires more than 2,000 chips, more than double the number used in a regular car. These are tough problems to solve, and they will be solvable thanks to the dynamic interplay of AI and semiconductor advances.
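A quick sanity check on that energy comparison, as a sketch of the arithmetic only:

```python
# One gigawatt-hour per day spread across 33,000 households.
KWH_PER_GWH = 1_000_000

kwh_per_household = 1.0 * KWH_PER_GWH / 33_000
print(f"{kwh_per_household:.1f} kWh per household per day")  # ~30.3 kWh
```

That works out to about 30 kWh per household per day, close to the roughly 29 kWh that the average US household uses, so the comparison is internally consistent.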

Alam talked about the impact of AI as well as software changes on hardware and chips. Here’s an edited transcript of our interview.

VentureBeat: Tell me what you’re interested in now.

Syed Alam: I’m hosting a panel discussion tomorrow morning. The topic is the hard part of AI, hardware and chips. Talking about how they’re enabling AI. Obviously the people who are doing the hardware and chips believe that’s the difficult part. People doing software believe that’s the difficult part. We’re going to take the view, most likely–I have to see what view my fellow panelists take. Most likely we’ll end up in a situation where the hardware independently or the software independently, neither is the difficult part. It’s the integration of hardware and software that’s the difficult part.

You’re seeing the companies that are successful–they’re the leaders in hardware, but also invested heavily in software. They’ve done a very good job of hardware and software integration. There are hardware or chip companies who are catching up on the chip side, but they have a lot of work to do on the software side. They’re making progress there. Obviously the software companies, companies writing algorithms and things like that, they’re being enabled by that progress. That’s a quick outline for the talk tomorrow.

VentureBeat: It makes me think about Nvidia and DLSS (deep learning super sampling) technology, enabled by AI. It’s used in graphics chips, where AI estimates the next pixel to draw based on the pixels it has already drawn.

Alam: Along the same lines, the success for Nvidia is obviously–they have a very powerful processor in this space. But at the same time, they’ve invested heavily in the CUDA architecture and software for many years. It’s the tight integration that is enabling what they’re doing. That’s making Nvidia the current leader in this space. They have a very powerful, robust chip and very tight integration with their software.
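Nvidia’s actual DLSS network and its inputs are proprietary, so the following is a loose toy sketch of the general idea only: render fewer pixels cheaply, then reconstruct a higher-resolution frame in software. The “refinement” step here is a stand-in box blur, not a learned network, which in a real system would predict missing detail from the current frame, motion vectors, and frame history.

```python
import numpy as np

def toy_upscale(low_res: np.ndarray, scale: int = 2) -> np.ndarray:
    """Nearest-neighbor upscale: the naive baseline that learned upscalers improve on."""
    return low_res.repeat(scale, axis=0).repeat(scale, axis=1)

def toy_refine(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned refinement pass: a 3x3 box blur to soften block edges."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / 9.0

frame = np.random.rand(4, 4)           # a "cheap" low-resolution render
full = toy_refine(toy_upscale(frame))  # reconstructed at 2x resolution
print(frame.shape, "->", full.shape)   # (4, 4) -> (8, 8)
```

The point Alam makes next is exactly why this matters: because the reconstruction lives in software, it can keep improving without respinning the silicon.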

VentureBeat: They were getting very good percentage gains from software updates for this DLSS AI technology, as opposed to sending the chip back to the factory another time.

Alam: That’s the beauty of a good software architecture. As I said, they’ve invested heavily over so many years. A lot of the time you don’t have to do–if you have tight integration with software, and the hardware is designed that way, then a lot of those updates can be done in software. You’re not spinning something new out every time a slight update is needed. That’s traditionally been the mantra in chip design. We’ll just spin out new chips. But now with the integrated software, a lot of those updates can be done purely in software.

VentureBeat: Have you seen a lot of changes happening among individual companies because of AI already?

Alam: At the semiconductor companies, obviously, we’re seeing them design more powerful chips, but at the same time also looking at software as a key differentiator. You saw AMD announce the acquisition of AI software companies. You’re seeing companies not only investing in hardware, but at the same time also investing in software, especially for applications like AI where that’s very important.

VentureBeat: Back to Nvidia, that was always an advantage they had over some of the others. AMD was always very hardware-focused. Nvidia was investing in software.

Alam: Exactly. They’ve been investing in CUDA for a long time. They’ve done well on both fronts. They came up with a very robust chip, and at the same time the benefits of investing in software for a long period came along around the same time. That’s made their offering very powerful.

VentureBeat: I’ve seen some other companies coming up with–Synopsys, for example, just announced that it’s going to sell some chips, designing its own chips as opposed to just making chip design software. It was interesting in that it starts to mean that AI is designing chips as much as humans are designing them.

Alam: We’ll see that more and more. Just like AI is writing code, you can translate that now into AI playing a key role in designing chips as well. It may not design the entire chip, but AI can do a lot of the first mile, with maybe just the last mile of customization done by human engineers. You’ll see the same thing applied to chip design, AI playing a role in design. At the same time, in manufacturing AI is playing a key role already, and it’s going to play a lot more of a role. We saw some of the foundry companies announcing that they’ll have a fab in a few years where there won’t be any humans. The leading fabs already have a very limited number of humans involved.

VentureBeat: I always felt like we’d eventually hit a wall in the productivity of engineers designing things. How many billions of transistors would one engineer be responsible for creating? The path leads to too much complexity for the human mind, too many tasks for one person to do without automation. The same thing is happening in game development, which I also cover a lot. There were 2,000 people working on a game called Red Dead Redemption 2, and that came out in 2018. Now they’re on the next version of Grand Theft Auto, with thousands of developers responsible for the game. It feels like you have to hit a wall with a project that complex.

Alam: No one engineer, as you know, actually puts together all these billions of transistors. It’s putting Lego blocks together. Every time you design a chip, you don’t start by putting every single transistor together. You take pieces and put them together. But having said that, a lot of that work will be enabled by AI as well. Which Lego blocks to use? Humans might decide that, but AI could help, depending on the design. It’s going to become more important as chips get more complicated and you get more transistors involved. Some of these things become almost humanly impossible, and AI will take over.

If I remember correctly, I saw a road map from TSMC–I think they were saying that by 2030, they’ll have chips with a trillion transistors. That’s coming. That won’t be possible unless AI is involved in a major way.
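A minimal sketch of the “Lego block” idea Alam describes: a modern design is a bill of pre-verified IP blocks, not individually placed transistors, and the total count is just the sum over block instances. All block names and transistor counts below are made up for illustration.

```python
# Toy model of block-based chip assembly. Figures are illustrative, not real.
ip_blocks = {
    "cpu_core":   2_000_000_000,
    "gpu_core":     500_000_000,
    "npu":        1_000_000_000,
    "sram_bank":    100_000_000,
    "io_complex":    50_000_000,
}

design = [("cpu_core", 8), ("gpu_core", 32), ("npu", 4),
          ("sram_bank", 64), ("io_complex", 2)]

total = sum(ip_blocks[name] * count for name, count in design)
print(f"total transistors: {total:,}")  # 42,500,000,000
```

The design-automation question AI helps with is the one Alam poses: which blocks to pick and how to compose them, a search space that grows far faster than the block count itself.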

VentureBeat: The path that people always took was that when you had more capacity to make something bigger and more complex, they always made it more ambitious. They never took the path of making it less complex or smaller. I wonder if the less complex path is actually the one that starts to get a little more interesting.

Alam: The other thing is, we talked about using AI in designing chips. AI is also going to be used for manufacturing chips. There are already AI techniques being used for yield improvement and things like that. As chips become more and more complicated, talking about many billions or a trillion transistors, the manufacturing of those dies is going to become even more complicated. AI is going to be used more and more in manufacturing. In designing the chip you run into physical limitations, and manufacturing alone can take 12 to 18 weeks. To increase throughput, increase yield, and improve quality, there are going to be more and more AI techniques in use.
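To see why yield improvement is so valuable, here is a sketch using the classic first-order (Poisson) die-yield model, Y = exp(-A * D0), where A is die area and D0 is defect density. The die area and defect densities below are illustrative numbers, not figures from any fab.

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Classic first-order (Poisson) die-yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# A large 6 cm^2 die at two hypothetical defect densities.
for d0 in (0.10, 0.07):
    print(f"D0={d0}: yield = {poisson_yield(6.0, d0):.1%}")
```

On these assumed numbers, shaving D0 from 0.10 to 0.07 defects/cm² lifts yield from about 55% to about 66%, which is the kind of gain an AI-driven yield program chases; bigger dies make the exponent, and the payoff, even steeper.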

VentureBeat: You have compounding effects in AI’s impact.

Alam: Yes. And again, going back to the point I made earlier, AI will be used to make more AI chips in a more efficient manner.

VentureBeat: Brian Comiskey gave one of the opening tech trends talks here. He’s one of the researchers at the CTA. He said that a horizontal wave of AI is going to hit every industry. The interesting question then becomes, what kind of impact does that have? What compound effects, when you change everything in the chain?

Alam: I think it will have the same kind of compounding effect that compute had. Computers were used initially for mathematical operations, those kinds of things. Then computing started to impact pretty much all of industry. AI is a different kind of technology, but it has a similar impact, and will be as pervasive.

That brings up another point. You’ll see more and more AI on the edge. It’s physically impossible to have everything done in data centers, because of power consumption, cooling, all of those things. Just as we do compute on the edge now, sensing on the edge, you’ll have a lot of AI on the edge as well.

VentureBeat: People say privacy is going to drive a lot of that.

Alam: A lot of factors will drive it. Sustainability, power consumption, latency requirements. Just as you expect compute processing to happen on the edge, you’ll expect AI on the edge as well. You can draw some parallels to when we first had the CPU, the main processor. All kinds of compute was done by the CPU. Then we decided that for graphics, we’d make a GPU. CPUs are all-purpose, but for graphics let’s make a separate ASIC.

Now, similarly, we have the GPU as the AI chip. All AI is running through that chip, a very powerful chip, but soon we’ll say, “For this neural network, let’s use this particular chip. For visual identification let’s use this other chip.” They’ll be super optimized for that particular use, especially on the edge. Because they’re optimized for that task, power consumption is lower, and they’ll have other advantages. Right now we have, in a way, centralized AI. We’re going toward more distributed AI on the edge.
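As a toy sketch of the routing decision behind task-specific edge silicon, and nothing more, the snippet below maps workloads to the most specialized accelerator available. All device names and power figures are hypothetical.

```python
# Toy dispatcher illustrating task-specific accelerators at the edge.
# All device names and power figures below are hypothetical.
ACCELERATORS = {
    "vision":   {"device": "edge-vision-asic", "watts": 2.0},
    "speech":   {"device": "edge-audio-npu",   "watts": 0.5},
    "fallback": {"device": "general-gpu",      "watts": 35.0},
}

def place_workload(task: str) -> dict:
    """Route a task to its specialized accelerator, or fall back to the general GPU."""
    return ACCELERATORS.get(task, ACCELERATORS["fallback"])

for task in ("vision", "speech", "translation"):
    slot = place_workload(task)
    print(task, "->", slot["device"], f"({slot['watts']} W)")
```

The design point Alam is making is visible in the fallback: anything without dedicated silicon pays the general-purpose power premium, which is what pushes edge deployments toward per-task chips.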

VentureBeat: I remember a good book way back when called Regional Advantage, about why Boston lost the tech industry to Silicon Valley. Boston had a very vertical business model, companies like DEC designing and making their own chips for their own computers. Then you had Microsoft and Intel and IBM coming along with a horizontal approach and winning that way.

Alam: You have more horizontalization, I guess is the word, happening with the fabless foundry model as well. With that model and foundries becoming available, more and more fabless companies got started. In a way, the cycle is repeating. I started my career at Motorola in semiconductors. At the time, all the tech companies of that era had their own semiconductor division. They were all vertically integrated. I worked at Freescale, which came out of Motorola. NXP came out of Philips. Infineon came from Siemens. All the tech leaders of that time had their own semiconductor division.

Because of the capex requirements and the cycles of the industry, they spun off a lot of these semiconductor operations into independent companies. But now we’re back to the same thing. All the tech companies of our time, the major tech companies, whether it’s Google or Meta or Amazon or Microsoft, they’re designing their own chips again. Very vertically integrated. Except the benefit they have now is they don’t have to have the fab. But at least they’re going vertically integrated up to the point of designing the chip. Maybe not manufacturing it, but designing it. Who knows? In the future they might manufacture as well. You have a little bit of verticalization happening now as well.

VentureBeat: I do wonder what explains Apple, though.

Alam: Yeah, they’re entirely vertically integrated. That’s been their philosophy for a long time. They’ve applied that to chips as well.

VentureBeat: But they get the benefit of using TSMC or Samsung.

Alam: Exactly. They still don’t have to have the fab, because the foundry model makes it easier to be vertically integrated. In the past, in the last cycle I was talking about with Motorola and Philips and Siemens, if they wanted to be vertically integrated, they had to build a fab. It was very difficult. Now these companies can be vertically integrated up to a certain level, but they don’t have to have manufacturing.

When Apple started designing their own chips–if you notice, when they were using chips from suppliers, like at the time of the original iPhone launch, they never talked about chips. They talked about the apps, the user interface. Then, when they started designing their own chips, the star of the show became, “Hey, this phone is using the A17 now!” It made other industry leaders realize that to truly differentiate, you want to have your own chip as well. You see a lot of other players, even in other areas, designing their own chips.

VentureBeat: Is there a strategic recommendation that comes out of this in some way? If you step outside into the regulatory realm, the regulators are looking at vertical companies as too concentrated. They’re looking closely at something like Apple, as to whether or not their store should be broken up. The ability to use one monopoly as support for another monopoly becomes anti-competitive.

Alam: I’m not a regulatory expert, so I can’t comment on that one. But there’s a difference. We were talking about vertical integration of technology. You’re talking about vertical integration of the business model, which is a bit different.

VentureBeat: I remember an Imperial College professor predicting that this horizontal wave of AI was going to boost the whole world’s GDP by 10 percent in 2032, something like that.

Alam: I can’t comment on the specific research. But it’s going to help the semiconductor industry quite a bit. Everyone keeps talking about a few major companies designing and coming out with AI chips. For every AI chip, you need all the other surrounding chips as well. It’s going to help the industry grow overall. Obviously we talk about how AI is going to be pervasive across so many other industries, creating productivity gains. That will have an impact on GDP. How much, how soon, we’ll have to see.

VentureBeat: Things like the metaverse–that seems like a horizontal opportunity across a bunch of different industries, getting into virtual online worlds. How would you most easily go about building ambitious projects like that, though? Is it the vertical companies like Apple that can take the first opportunity to build something like that, or is it spread out across industries, with someone like Microsoft as just one layer?

Alam: We can’t assume that a vertically integrated company will have an advantage in something like that. Horizontal companies, if they have the right level of ecosystem partnerships, they can do something like that as well. It’s hard to make a definitive statement, that only vertically integrated companies can build a new technology like this. They obviously have some benefits. But if Microsoft, like in your example, has good ecosystem partnerships, they could also succeed. Something like the metaverse, we’ll see companies using it in different ways. We’ll see different kinds of user interfaces as well.

VentureBeat: The Apple Vision Pro is an interesting product to me. It could be transformative, but then they come out with it at $3,500. If you apply Moore’s Law to that, it could be 10 years before it’s down to $300. Can we expect the kind of progress that we’ve come to expect over the last 30 years or so?

Alam: All of these kinds of products, these emerging technology products, when they initially come out they’re obviously very expensive. The volume isn’t there. Interest from the public and consumer demand drives up volume and drives down cost. If you don’t ever put it out there, even at that higher price point, you don’t get a sense of what the volume is going to be like and what consumer expectations are going to be. You can’t put a lot of effort into driving down the cost until you get that. They both help each other. The technology getting out there helps educate consumers on how to use it, and once we see the expectation and can increase volume, the price goes down.

The other benefit of putting it out there is understanding different use cases. The product managers at the company may think the product has, say, these five use cases, or these 10 use cases. But you can’t think of all the possible use cases. People might start using it in this direction, creating demand through something you didn’t expect. You might run into these 10 new use cases, or 30 use cases. That will drive volume again. It’s important to get a sense of market adoption, and also get a sense of different use cases.
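For what it’s worth, the $3,500-to-$300 arithmetic in the question above can be checked with a sketch that assumes cost halves every two years, an idealized Moore’s-Law-style curve that real product prices rarely follow:

```python
import math

# Back-of-envelope check on $3,500 -> $300, assuming cost halves every
# two years (an idealized curve; real prices rarely track it).
start, target, halving_years = 3_500, 300, 2

years_needed = halving_years * math.log2(start / target)
print(f"years to reach ${target}: {years_needed:.1f}")  # ~7.1 years

after_10_years = start / 2 ** (10 / halving_years)
print(f"price after 10 years: ${after_10_years:,.0f}")  # ~$109
```

Under that assumption the $300 mark arrives in about seven years, not ten; the real driver, as Alam argues, is whether volume and use cases materialize to pull the cost curve down at all.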

VentureBeat: You never know what consumer desire is going to be until it’s out there.

Alam: You have some sense of it, obviously, because you invested in it and put the product out there. But you don’t fully appreciate what’s possible until it hits the market. Then the volume and the rollout is driven by consumer acceptance and demand.

VentureBeat: Do you think there are enough levers for chip designers to pull to deliver the compounding benefits of Moore’s Law?

Alam: Moore’s Law in the classic sense, just shrinking the die, is going to hit its physical limits. We’ll have diminishing returns. But in a broader sense, Moore’s Law is still applicable. You get the efficiency by doing chiplets, for example, or improving packaging, things like that. The chip designers are still squeezing more efficiency out. It may not be in the classic sense that we’ve seen over the past 30 years or so, but through other methods.

VentureBeat: So you’re not overly pessimistic?

Alam: When we started seeing that the classic Moore’s Law, shrinking the die, would slow down, and the costs were becoming prohibitive–the wafer for 5nm is super expensive compared to legacy nodes. Building the fabs costs twice as much. Building a really cutting-edge fab is costing significantly more. But then you see advancements on the packaging side, with chiplets and things like that. AI will help with all of this as well.




Author: Dean Takahashi
Source: VentureBeat
