Blink, and you might just miss the invention of yet another programming language. The old joke goes that programmers spend 20% of their time coding and 80% of their time deciding what language to use. In fact, there are so many programming languages out there that we are not sure how many we actually have. It’s probably safe to say there are at least 700 programming languages lingering in various states of use and misuse. There is always room for more improvement, it seems.
As AI keeps pushing the envelope, it’s also pushing the limits of our most popular programming languages: Java, C and Python. And, like everything else, AI is another problem just begging for a new programming language to solve it. This time, however, history suggests it might not be such a bad idea.
In the beginning
It is not the first time AI has driven a wave of new programming languages. The 1970s and 1980s saw a golden age of AI-focused languages like LISP and Prolog, which introduced groundbreaking concepts such as symbolic processing and logic programming. Then as now, AI was the hot topic.
Notably, the LISP language profoundly impacted the future of software by introducing the functional programming paradigm, ultimately influencing the design of modern languages like Python, Haskell and Scala. LISP was also one of the first languages to implement dynamic typing, where types are associated with values rather than variables, allowing for more flexibility and ease of prototyping. It also introduced garbage collection, which automatically reclaims memory no longer in use, a feature many modern programming languages, such as Java, Python and JavaScript, have adopted. It is fair to say that, without LISP, we would likely not be where we are today.
When the AI field experienced a long period of diminished funding and interest in the 1970s and 1980s, the so-called “AI Winters”, the focus on specialized AI languages like LISP began to fade. Simultaneously, the rapid advancement of general-purpose computing led to the rise of general-purpose languages like C, which offered better performance and portability for a wide range of applications, including systems programming and numerical computations.
The return of AI-first languages
Now, history seems to be repeating itself, and AI is once again driving the invention of new programming languages to solve its thorny problems. The intense numerical computation and parallel processing that modern AI algorithms require highlight the need for languages that can bridge the gap between high-level abstraction and efficient use of the underlying hardware.
Arguably, the trend started with frameworks and languages like TensorFlow’s Tensor Computation Syntax and Julia, along with revived interest in array-oriented languages like APL and J, which offer domain-specific constructs that align with the mathematical foundations of machine learning and neural networks. These projects tried to reduce the overhead of translating mathematical concepts into general-purpose code, allowing researchers and developers to focus more on the core AI logic and less on low-level implementation details.
More recently, a new wave of AI-first languages has emerged, designed from the ground up to address the specific needs of AI development. Bend, created by Higher Order Company, aims to bring massively parallel, GPU-ready execution to a flexible, Python-like programming model. Mojo, developed by Modular, focuses on high performance, scalability, and ease of use for building and deploying AI applications. Swift for TensorFlow, an extension of the Swift programming language, combined the high-level syntax and ease of use of Swift with the power of TensorFlow’s machine learning capabilities. These languages represent a growing trend towards specialized tools and abstractions for AI development.
While general-purpose languages like Python, C++, and Java remain popular in AI development, the resurgence of AI-first languages signifies a recognition that AI’s unique demands require specialized languages tailored to the domain’s specific needs, much like the early days of AI research that gave rise to languages like LISP.
The limitations of Python for AI
Python, for example, has long been the favorite among modern AI developers for its simplicity, versatility, and extensive ecosystem. However, its performance limitations have been a major drawback for many AI use cases.
Training deep learning models in Python can be painfully slow—we’re talking DMV slow, waiting-for-the-cashier-to-make-correct-change slow. Libraries like TensorFlow and PyTorch help by using C++ under the hood, but Python’s still a bottleneck, especially when preprocessing data and managing complex training workflows.
Inference latency is critical in real-time AI applications like autonomous driving or live video analysis. However, Python’s Global Interpreter Lock (GIL) prevents multiple native threads from executing Python bytecodes simultaneously, leading to suboptimal performance in multi-threaded environments.
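The effect is easy to see in a toy sketch (illustrative, not a benchmark): the threads below all run correctly, but because the GIL admits only one thread into the interpreter at a time, CPU-bound pure-Python work like this gains essentially nothing from adding threads.

```python
import threading

def cpu_bound(n):
    # Pure-Python CPU work; the GIL serializes this across threads
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_in_threads(workers, n):
    # Even with several threads, only one executes Python bytecode at a time,
    # so wall-clock time barely improves over running the work sequentially.
    results = []
    def worker():
        results.append(cpu_bound(n))
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Each thread produces the correct result; the GIL costs throughput, not correctness.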
In large-scale AI applications, efficient memory management is crucial to maximize the use of available resources. Python’s dynamic typing and automatic memory management can increase memory usage and fragmentation. Low-level control over memory allocation, as seen in languages like C++ and Rust, allows for more efficient use of hardware resources, improving the overall performance of AI systems.
Deploying AI models in production environments, especially on edge devices with limited computational resources, can be challenging with Python. Python’s interpreted nature and runtime dependencies can lead to increased resource consumption and slower execution speeds. Compiled languages like Go or Rust, which offer lower runtime overhead and better control over system resources, are often preferred for deploying AI models on edge devices.
Enter Mojo
Mojo is a new programming language that promises to bridge the gap between Python’s ease of use and the lightning-fast performance required for cutting-edge AI applications. The language was created by Modular, a company founded by Chris Lattner, creator of the Swift programming language and the LLVM compiler infrastructure. Mojo is designed to be a superset of Python, which means developers can leverage their existing Python knowledge and codebases while unlocking unprecedented performance gains. Mojo’s creators claim that it can be up to 35,000 times faster than Python code.
At the heart of Mojo’s design is its focus on seamless integration with AI hardware, such as GPUs running CUDA and other accelerators. Mojo enables developers to harness the full potential of specialized AI hardware without getting bogged down in low-level details.
One of Mojo’s key advantages is its interoperability with the existing Python ecosystem. Unlike languages like Rust, Zig or Nim, which can have steep learning curves, Mojo allows developers to write code that seamlessly integrates with Python libraries and frameworks. Developers can continue to use their favorite Python tools and packages while benefiting from Mojo’s performance enhancements.
Mojo introduces several features that set it apart from Python. It supports static typing, which can help catch errors early in development and enable more efficient compilation. However, developers can still opt for dynamic typing when needed, providing flexibility and ease of use. The language introduces new keywords, such as “var” and “let,” which provide different levels of mutability. Mojo also includes a new “fn” keyword for defining functions within the strict type system.
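Mojo’s “fn”, “var” and “let” are not valid Python, but Python’s optional type hints give a feel for the gradual-typing idea Mojo builds on: annotations that a static checker (such as mypy) can verify before the program runs, while the interpreter itself stays dynamic. A minimal sketch (the function is illustrative, not from Mojo’s documentation):

```python
def dot(a: list[float], b: list[float]) -> float:
    # A static checker can flag a call like dot("abc", 3) before runtime,
    # while plain CPython would only fail when the code executes.
    assert len(a) == len(b), "vectors must have the same length"
    return sum(x * y for x, y in zip(a, b))
```

Mojo goes further by using such type information for ahead-of-time compilation rather than only for checking.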
Mojo also incorporates an ownership system and borrow checker similar to Rust, ensuring memory safety and preventing common programming errors. Additionally, Mojo offers memory management with pointers, giving developers fine-grained control over memory allocation and deallocation. These features contribute to Mojo’s performance optimizations and help developers write more efficient and error-free code.
One of Mojo’s most exciting aspects is its potential to accelerate AI development. With its ability to compile to highly optimized machine code that can run at native speeds on both CPUs and GPUs, Mojo enables developers to write complex AI applications without sacrificing performance. The language includes high-level abstractions for data parallelism, task parallelism, and pipelining, allowing developers to express sophisticated parallel algorithms with minimal code.
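Mojo’s parallel abstractions are its own, but the underlying data-parallel pattern can be sketched in plain Python with the standard library: split the input into chunks and map a worker function over them in separate processes (the worker and chunk sizing here are illustrative, not Mojo APIs).

```python
from concurrent.futures import ProcessPoolExecutor

def square_chunk(chunk):
    # The per-worker kernel: here just an elementwise square
    return [x * x for x in chunk]

def parallel_map(data, workers=4):
    # Split the input into roughly equal chunks, one per worker process
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        mapped = pool.map(square_chunk, chunks)  # preserves chunk order
    # Flatten the per-chunk results back into one list
    return [y for chunk in mapped for y in chunk]
```

Languages like Mojo aim to make this kind of decomposition implicit and compile it down to efficient CPU or GPU code, rather than paying process-spawning and serialization overhead as Python does here.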
Mojo is conceptually lower-level than some other emerging AI languages like Bend, which compiles modern high-level language features to native multithreading on Apple Silicon or NVIDIA GPUs. Mojo offers fine-grained control over parallelism, making it particularly well-suited for hand-coding modern neural network accelerations. By providing developers with direct control over the mapping of computations onto the hardware, Mojo enables the creation of highly optimized AI implementations.
Leveraging the power of open source
According to Mojo’s creator, Modular, the language has already garnered an impressive user base of over 175,000 developers and 50,000 organizations since it was made generally available last August.
Despite its impressive performance and potential, Mojo’s adoption may have been slowed initially by its proprietary status.
However, Modular recently decided to open-source Mojo’s core components under a customized version of the Apache 2 license. This move will likely accelerate Mojo’s adoption and foster a more vibrant ecosystem of collaboration and innovation, similar to how open source has been a key factor in the success of languages like Python.
Developers can now explore Mojo’s inner workings, contribute to its development, and learn from its implementation. This collaborative approach will likely lead to faster bug fixes, performance improvements and the addition of new features, ultimately making Mojo more versatile and powerful.
The permissive Apache License allows developers to freely use, modify, and distribute Mojo, encouraging the growth of a vibrant ecosystem around the language. As more developers build tools, libraries, and frameworks for Mojo, the language’s appeal will grow, attracting potential users who can benefit from rich resources and support. The Apache 2.0 license’s compatibility with other open-source licenses, such as GPLv3, enables seamless integration with other open-source projects.
A whole new wave of AI-first programming
While Mojo is a promising new entrant, it’s not the only language trying to become the go-to choice for AI development. Several other emerging languages are also designed from the ground up with AI workloads in mind.
One notable example was Swift for TensorFlow, an ambitious project to bring the powerful language features of Swift to machine learning. Developed at Google, Swift for TensorFlow allowed developers to express complex machine learning models using native Swift syntax, with the added benefits of static typing, automatic differentiation, and XLA compilation for high-performance execution on accelerators. Google unfortunately stopped development and the project is now archived, which shows just how difficult it can be to gain user traction with a new language, even for a giant like Google.
Google has since increasingly focused on JAX, a Python library for high-performance numerical computing and machine learning that supports automatic differentiation, XLA compilation and efficient use of accelerators. While not a standalone language, JAX extends Python with a more declarative and functional style that aligns well with the mathematical foundations of machine learning.
The latest addition is Bend, a massively parallel, high-level programming language that compiles a Python-like language directly into GPU kernels. Unlike low-level beasts like CUDA and Metal, Bend feels more like Python and Haskell, offering fast object allocations, higher-order functions with full closure support, unrestricted recursion and even continuations. It runs on massively parallel hardware like GPUs, delivering near-linear speedup with core count and requiring no explicit parallel annotations: no thread spawning, no locks, mutexes or atomics. Powered by the HVM2 runtime, Bend exploits parallelism wherever it can, making it something of a Swiss Army knife for AI workloads.
These languages leverage modern language features and strong type systems to enable expressive and safe coding of AI algorithms while still providing high-performance execution on parallel hardware.
The dawn of a new era in AI development
The resurgence of AI-focused programming languages like Mojo, Bend, Swift for TensorFlow, JAX and others marks the beginning of a new era in AI development. As the demand for more efficient, expressive, and hardware-optimized tools grows, we expect to see a proliferation of languages and frameworks that cater specifically to the unique needs of AI. These languages will leverage modern programming paradigms, strong type systems, and deep integration with specialized hardware to enable developers to build more sophisticated AI applications with unprecedented performance.
The rise of AI-focused languages will likely spur a new wave of innovation in the interplay between AI, language design and hardware development. As language designers work closely with AI researchers and hardware vendors to optimize performance and expressiveness, we will likely see the emergence of novel architectures and accelerators designed with these languages and AI workloads in mind.
This close relationship between AI, language, and hardware will be crucial in unlocking the full potential of artificial intelligence, enabling breakthroughs in fields like autonomous systems, natural language processing, computer vision, and more. The future of AI development, and of computing itself, is being reshaped by the languages and tools we create today.
Author: James Thomason
Source: Venturebeat
Reviewed By: Editorial Team