
Intel CTO highlights open and secure advances for AI deployment

On the second day of the Intel Innovation 2023 event, Intel CTO Greg Lavender detailed how Intel’s developer-first, open ecosystem philosophy is enabling widespread accessibility to artificial intelligence (AI) opportunities.

Lavender said Intel wants to address challenges faced by developers in deploying AI solutions and outlined the company’s approach grounded in openness, choice, trust, and security.

He acknowledged that developers play a crucial role in leveraging AI to meet diverse industry needs, both now and in the future. To ensure AI is accessible to all, Intel believes developers should have the freedom to choose the hardware and software that best suit their requirements.

“The developer community is the catalyst helping industries leverage AI to meet their diverse needs – both today and into the future,” Lavender said. “AI can and should be accessible to everyone to deploy responsibly. If developers are limited in their choice of hardware and software, the range of use cases for global-scale AI adoption will be constrained and likely limited in the societal value they are capable of delivering.”

By providing tools that streamline the development of secure AI applications and reduce the investment necessary to maintain and scale those solutions, Intel aims to empower developers to bring AI everywhere.

On the graphics front, Lavender said Argonne National Laboratory’s Aurora supercomputer will use 63,744 Intel Max Series GPUs, making it the largest GPU cluster in the world.

Intel Trust Authority

In his keynote address, Lavender highlighted Intel’s focus on end-to-end security. This includes Intel’s Transparent Supply Chain, which verifies hardware and firmware integrity, and confidential computing, which safeguards sensitive data in memory.

On a panel a day earlier, Lavender warned that phishing scammers will soon be able to stage a Zoom call in which someone who looks and talks like you tries to convince your parents to wire money to a bank account immediately, and your parents won’t be able to tell the difference. That is the clear downside.

On the upside, Intel CEO Pat Gelsinger said PCs would have enough AI computing at the edge to do their own AI processing on a single device or a small group of devices. They could make all of your calls and recordings searchable and immediately available to an AI assistant that helps you when you need it.

Intel announced the general availability of a new attestation service, the first offering in the Intel Trust Authority portfolio of security software and services. This service provides an independent assessment of trusted execution environment integrity and policy enforcement, enhancing security in multi-cloud, hybrid, on-premises, and edge environments.

Addressing challenges

Intel CTO Greg Lavender speaks at Intel Innovation 2023 on day one.

Lavender said Intel recognizes the limitations businesses face when implementing AI solutions, such as a lack of expertise, resource constraints, and costly proprietary platforms. To address these challenges, he said Intel is committed to driving an open ecosystem that allows for easy deployment across multiple architectures.

As a founding member of the Linux Foundation’s Unified Acceleration Foundation (UXL), Intel aims to simplify application development for cross-platform deployment. Intel will contribute its oneAPI programming model to the UXL Foundation, enabling code to be written once and deployed across various computing architectures.
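
The oneAPI model is built on standard C++ with SYCL, so a kernel written once can be dispatched to whatever device the runtime finds. As a rough illustration of that idea (not code from the announcement), a minimal SYCL 2020 vector add might look like the sketch below, assuming a oneAPI-style compiler such as DPC++ (icpx -fsycl):

#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q;  // the runtime selects an available device (GPU, CPU, ...)

    constexpr size_t n = 1024;
    float *a = sycl::malloc_shared<float>(n, q);
    float *b = sycl::malloc_shared<float>(n, q);
    float *c = sycl::malloc_shared<float>(n, q);
    for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The same kernel source is compiled and dispatched for whichever
    // device the queue is bound to.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();

    std::printf("ran on: %s, c[0] = %.1f\n",
                q.get_device().get_info<sycl::info::device::name>().c_str(),
                c[0]);

    sycl::free(a, q);
    sycl::free(b, q);
    sycl::free(c, q);
    return 0;
}

The same source can target a GPU, a CPU, or another accelerator depending on which device the queue selects, which is the “write once, deploy across architectures” property the UXL Foundation aims to standardize.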

Intel is also collaborating with leading software vendors Red Hat, Canonical, and SUSE to provide Intel-optimized distributions of their enterprise software releases. This collaboration ensures optimized performance for the latest Intel architectures. Furthermore, Intel continues to contribute to AI and machine-learning tools and frameworks such as PyTorch and TensorFlow.

Intel CTO Greg Lavender said Intel is working with more than 100 AI startups.

To help developers scale performance efficiently, Intel Granulate is introducing Auto Pilot for Kubernetes pod resource rightsizing. This capacity-optimization tool offers automatic and continuous capacity management recommendations for Kubernetes users. In addition, Intel Granulate is adding autonomous orchestration capabilities for Databricks workloads, delivering significant cost reduction and processing time improvements.
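
Rightsizing here means continuously adjusting a pod’s resource requests to match observed demand rather than a static guess. As a simplified sketch of the general idea only (not Granulate’s actual algorithm, and with the percentile and headroom values chosen purely for illustration), a recommendation could be derived from recent usage like this:

#include <algorithm>
#include <cstdio>
#include <vector>

// Illustrative heuristic: recommend a CPU request near a high percentile of
// recently observed usage, plus headroom, so the request tracks real demand.
double recommended_cpu_request(std::vector<double> observed_millicores,
                               double percentile = 0.95,
                               double headroom = 1.15) {
    std::sort(observed_millicores.begin(), observed_millicores.end());
    const size_t idx =
        static_cast<size_t>(percentile * (observed_millicores.size() - 1));
    return observed_millicores[idx] * headroom;
}

int main() {
    // Hypothetical per-minute CPU usage samples for one pod, in millicores.
    const std::vector<double> samples = {120, 150, 180, 210, 230, 240, 260, 900};
    std::printf("recommended CPU request: %.0f millicores\n",
                recommended_cpu_request(samples));
    return 0;
}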

Recognizing the need to protect AI models, data, and platforms from tampering and theft, Intel plans to develop an application-specific integrated circuit (ASIC) accelerator to reduce performance overhead associated with fully homomorphic encryption (FHE). Intel also announced the upcoming launch of a beta version of an encrypted computing software toolkit, which will enable researchers, developers, and user communities to experiment with FHE coding.
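
For context, fully homomorphic encryption allows computation directly on ciphertexts: adding or multiplying encrypted values and then decrypting yields the same result as operating on the plaintexts, roughly

\[
\mathrm{Dec}\big(\mathrm{Enc}(a) \oplus \mathrm{Enc}(b)\big) = a + b,
\qquad
\mathrm{Dec}\big(\mathrm{Enc}(a) \otimes \mathrm{Enc}(b)\big) = a \cdot b,
\]

where the circled operators denote the scheme’s operations on ciphertexts. Evaluating long chains of such operations is typically orders of magnitude slower than computing on plaintext, which is the performance overhead the planned ASIC accelerator is meant to reduce.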

Author: Dean Takahashi
Source: VentureBeat
Reviewed By: Editorial Team
