
Top 5 stories of the week: AI continues to reign supreme



Once again, AI topped the headlines this week, keeping our AI editor Sharon Goldman busy as ever.

Notably, tech behemoths Google, Microsoft and IBM are staking big claims in the technology — specifically in generative AI, conversational AI and advanced foundation models. And, AWS is leveraging ML to dramatically improve fulfillment efficiency.

Interested in reading more? Here are the top five stories for the week of February 6.

1. The ‘race starts today’ in search as Microsoft reveals new OpenAI-powered Bing, ‘copilot for the web’

The “race starts today” in search, according to Microsoft. The company says it is “going to move fast” with the announcement of a reimagined Bing search engine, Edge web browser and chat powered by OpenAI’s ChatGPT and generative AI.


The new Bing for desktop — which will be free with ads — is available in limited preview; Microsoft plans to launch a mobile version in a few weeks. The search engine runs on a new, next-generation language model, dubbed Prometheus, which the company says is more powerful than ChatGPT and customized for search.

OpenAI CEO Sam Altman called the partnership the “beginning of a new era” with the aim to get AI into the hands of more people.

2. OpenAI rival Cohere AI has flown under the radar; that may be about to change

The execs at Cohere AI admit that their company is “crazy under the radar” — but that shouldn’t be the case. The OpenAI and Google rival offers developers and businesses access to natural language processing (NLP) powered by large language models (LLMs).

The company’s platform can be used to generate or analyze text for writing copy, moderating content, classifying data and extracting information — all at massive scale. It is available as a managed service through an API, via cloud machine learning (ML) platforms like Amazon SageMaker and Google Vertex AI. For enterprise customers, the platform is also available as private LLM deployments in a virtual private cloud (VPC), or even on-premises.
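For developers, that access typically amounts to a few lines of code against the hosted API. Below is a minimal, illustrative sketch using Cohere’s Python SDK; the API key is a placeholder, and the exact client methods, default models and response fields vary by SDK version, so treat this as an assumption rather than official usage:

```python
# pip install cohere  -- Cohere's official Python SDK (API surface is version-dependent)
import cohere

# Placeholder key; in practice this comes from the Cohere dashboard.
co = cohere.Client("YOUR_API_KEY")

# Generate marketing copy from a short prompt.
response = co.generate(
    prompt="Write a one-sentence product description for a reusable water bottle.",
    max_tokens=50,
)
print(response.generations[0].text)

# Embed short documents for downstream search or classification.
embeddings = co.embed(texts=["refund policy", "shipping times"]).embeddings
print(len(embeddings), "embeddings of dimension", len(embeddings[0]))
```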

Cohere is “squarely focused” on how it can add value to the enterprise, according to its CEO and cofounder Aidan Gomez. And, it may not remain unnoticed for long: There are rumors that Cohere is in talks to raise a funding round in the hundreds of millions.

3. How AWS used ML to help Amazon fulfillment centers reduce downtime by 70%

Amazon customers have gotten used to — and have high expectations for — ultrafast delivery.

But this doesn’t happen by magic. Instead, packages at hundreds of fulfillment centers traverse miles of conveyor and sorter systems every day.

And, Amazon needs its equipment to operate reliably if it hopes to quickly deliver packages.

The company has announced that it uses Amazon Monitron, an end-to-end machine learning (ML) system that detects abnormal behavior in industrial machinery and enables predictive maintenance. The system combines sensors, gateways and a companion mobile app.

And, as a result of the technology, Amazon has reduced unplanned downtime at the fulfillment centers by nearly 70%, which helps deliver more customer orders on time.
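Amazon hasn’t published Monitron’s internals, but conceptually this class of predictive-maintenance system flags sensor readings that drift from a machine’s learned healthy baseline. Here is a minimal, hypothetical sketch in Python; scikit-learn’s IsolationForest merely stands in for whatever models Monitron actually uses, and the sensor values are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline: vibration (mm/s) and temperature (degrees C) readings
# collected from a healthy conveyor motor.
rng = np.random.default_rng(0)
healthy = np.column_stack([
    rng.normal(2.0, 0.2, 5000),   # vibration
    rng.normal(45.0, 1.5, 5000),  # temperature
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings streamed from a sensor gateway; the last one drifts upward,
# the kind of pattern that would trigger a maintenance alert before a failure.
new_readings = np.array([[2.1, 45.5], [2.0, 44.8], [4.8, 61.0]])
flags = detector.predict(new_readings)  # -1 means anomalous
for reading, flag in zip(new_readings, flags):
    status = "ALERT: inspect equipment" if flag == -1 else "normal"
    print(reading, status)
```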

4. Google stays quiet on Bard, its LaMDA-powered conversational AI

Google has declined to offer much new information about Bard, its conversational AI search tool powered by the LaMDA model. A new blog post on Monday appeared to be a muted response to Microsoft CEO Satya Nadella’s declaration at an event this week that the “race starts today” in search, and that “we’re going to move fast.”

After the Microsoft event, Google shares plunged 8%. Reuters reported that a Twitter advertisement for the new Bard service included inaccurate information about which telescope first took pictures of a planet outside Earth’s solar system.

Google will initially release Bard with a lightweight model version of LaMDA to trusted testers before launching it more broadly.

5. How IBM’s new supercomputer is making AI foundation models more enterprise-budget friendly

IBM announced this week that it has built out an AI supercomputer to underpin its foundation model training research and development initiatives. Named Vela, the system has been designed as cloud-native and makes use of industry-standard hardware, including x86 silicon, Nvidia GPUs and Ethernet-based networking.

The software stack that enables the foundation model training makes use of a series of open-source technologies including Kubernetes, PyTorch and Ray. While IBM is only now officially revealing the existence of the Vela system, it has actually been online in various capacities since May 2022.
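IBM hasn’t published Vela’s training code itself, so the following is a generic, illustrative sketch of the kind of PyTorch distributed data-parallel job a Kubernetes/PyTorch/Ray stack like this schedules across GPU nodes; the model, hyperparameters and launch command are assumptions, not IBM’s actual workload:

```python
# Illustrative only -- not IBM's code. A toy data-parallel training loop,
# launched with: torchrun --nproc_per_node=<gpus_per_node> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE and LOCAL_RANK for every worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for a foundation model; real runs shard far larger networks.
    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        batch = torch.randn(8, 1024, device=local_rank)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced across GPUs over the network
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

On a cloud-native cluster of the sort IBM describes, each such worker would typically run inside a container scheduled by Kubernetes, with the Ethernet fabric carrying the gradient all-reduce traffic between nodes.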



Author: Taryn Plumb
Source: VentureBeat
