
Apple’s no-code Trinity AI platform handles complex spatial datasets



Apple has been slowly but surely making a name for itself in the low-code/no-code movement. This July, the Cupertino-based company announced Trinity AI, a no-code platform for complex spatial datasets. Trinity enables machine learning researchers and developers without AI experience to tailor complex spatiotemporal datasets to fit deep learning models.
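The article doesn't detail how Trinity represents spatial data internally, but a common way to make spatiotemporal datasets fit deep learning models is to rasterize point observations into image-like tensors that standard vision architectures can consume. Below is a minimal Python sketch of that general technique; the coordinates, grid size, and function name are illustrative assumptions, not Trinity's actual code.

```python
# Hypothetical sketch: binning timestamped GPS points into a density grid,
# one common way to adapt spatial data for deep learning models.
import numpy as np

def rasterize_points(points, bounds, grid_size=64):
    """Bin (lat, lon) points into a grid_size x grid_size density map."""
    lat_min, lat_max, lon_min, lon_max = bounds
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    # Map each coordinate into a cell index, clipping to the grid edges.
    rows = ((points[:, 0] - lat_min) / (lat_max - lat_min) * grid_size).astype(int)
    cols = ((points[:, 1] - lon_min) / (lon_max - lon_min) * grid_size).astype(int)
    rows = np.clip(rows, 0, grid_size - 1)
    cols = np.clip(cols, 0, grid_size - 1)
    np.add.at(grid, (rows, cols), 1.0)  # accumulate point counts per cell
    return grid

# 1,000 simulated "device pings" become one channel of an input tensor;
# stacking grids from successive time windows captures the temporal axis.
pings = np.random.uniform([37.77, -122.42], [37.78, -122.41], size=(1000, 2))
channel = rasterize_points(pings, bounds=(37.77, 37.78, -122.42, -122.41))
print(channel.shape, channel.sum())  # (64, 64) 1000.0
```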

Back in 2019, Apple revealed SwiftUI, a declarative UI framework that requires far less code than building interfaces in Swift with UIKit. With the release of Trinity, Apple doubles down on its effort to significantly lower the barrier for non-developers and developers without machine learning expertise.

Fusemachines CEO Sameer Maskey, who also teaches AI as an adjunct associate professor at Columbia University, sees Trinity as a great way for developers to use machine learning in their apps. “Initially, I see Trinity being used by devs who already create apps for iOS, but who don’t know machine learning, so they can incorporate spatial datasets in their work,” Maskey told VentureBeat.

We asked Maskey to give VentureBeat his take on Apple’s platform and what it means for the future of AI and the low-code/no-code industry. What follows is a transcript of the interview, lightly edited for clarity.

VentureBeat: What is Apple’s goal with this platform?

Maskey: I wouldn’t say it’s so groundbreaking, in the sense that they are creating a system similar to others out there that have tried to do something similar. The difference with Trinity is that it’s more focused on geospatial data, particularly maps and moving objects on maps. Especially on the phone, a lot of people are trying to build all sorts of applications using geospatial data. And if they’re building an app for iPhone first, for some of them it might be easier to use Trinity than other platforms, because it’s probably very tightly integrated. Even if you don’t know machine learning, if you have a framework for building apps, you can quickly tap into the Trinity platform to build models for various ML tasks.

VentureBeat: Can you give us an example of how Trinity would work with geospatial apps?

Maskey: Sure. Let’s say you’re trying to build an app that automatically recommends the hottest food places to go to, in a small part of the city. And let’s say you somehow have access to data about people in that location: how many people are there, how many people are going to that location, and so forth. You basically get to predict what the hot joints are, and which ones you’d like based on your preferences.

And let’s say you take all of this streaming data, all of this location data. You would build it on your computer or on a server or whatever system you use (a lot of people write code in Jupyter notebooks), and you would try many different machine learning algorithms. Even with neural networks, you try many different types and sizes of networks. You keep experimenting with many, many different models and then say, OK, this is the model that best predicts what the next popular food joint is going to be. After that, you have to productionize it. And if your production stack is AWS or GCP, you need to know all the dev ops behind them to take it to production, and then create an API. All of this becomes easier in Trinity, because Trinity allows you to just dump the data and provide the targets of what you want it to do. It will figure out which machine learning algorithm to use and what kind of neural network architecture to choose, do all the training, and handle the productionization.
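To make the “before” workflow Maskey describes concrete, here is a hedged sketch of hand-rolled model experimentation: trying several algorithms and keeping the best performer, the loop a no-code platform automates along with the deployment steps that follow. The features, synthetic data, and model choices are hypothetical stand-ins, not anything Apple prescribes.

```python
# Hypothetical sketch of notebook-style model experimentation with
# scikit-learn; a no-code platform automates this selection loop.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Made-up features per venue: foot traffic, nearby-device count, hour of day.
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)  # 1 = "hot" food spot

candidates = {
    "logreg": LogisticRegression(),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
}

# Score every candidate with cross-validation and keep the best one.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)
```

From there, the engineer would still have to containerize the winning model, deploy it, and expose it behind an API, which is the dev-ops burden Maskey says Trinity takes off the table.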

VentureBeat: Can Trinity really be used in a professional setting? Can we trust its prediction models or will it need fine tuning?

Maskey: Trinity and other similar platforms are professional systems, and for some problems they work really well; they’re good enough even for production-grade systems. But in many cases they’re not, in the sense that they might give you maybe 5% less accuracy than an engineer who tweaks, at a very low level, how the machine learning system is built. Such an engineer is able to squeeze out an additional 5% of accuracy, which can make the difference in a competitive world where you’re charging money for the APIs.
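As one concrete example of the low-level tweaking Maskey refers to, an engineer might run an exhaustive hyperparameter search to claw back accuracy an automated platform leaves behind. The following is a minimal sketch; the model, parameter grid, and synthetic dataset are illustrative assumptions.

```python
# Hypothetical sketch: a hand-run hyperparameter grid search, one way an
# engineer squeezes extra accuracy out of a model after an AutoML-style
# platform has produced a baseline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={
        "learning_rate": [0.01, 0.05, 0.1],
        "max_depth": [2, 3, 4],
        "n_estimators": [100, 300],
    },
    cv=5,  # 5-fold cross-validation for each parameter combination
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```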

VentureBeat: Where do you see the future of platforms like this? Low-code/no-code AI?

Maskey: AI is overhyped right now. I think these platforms will become more and more comprehensive, supporting more than one kind of machine learning system and more of the algorithms we need within them. Hopefully the accuracy will improve across various sets of tasks. At some point they will probably become more specialized. Trinity is already a more specialized version of these kinds of systems, focused on geospatial data, but my guess is that it will expand beyond geospatial data later on.

I think, in general, more platforms will launch, and they will be more and more specialized. And if they get accuracy to a level that’s pretty much on par with what developers can do now, then this really becomes a transformative tool. Because at that point, a lot of machine learning engineers will no longer be needed for much of today’s AI work.



Author: Susana Mendoza
Source: VentureBeat
