AI & Robotics News

Facebook claims its AI can anticipate COVID-19 outcomes using X-rays

Researchers at Facebook and New York University (NYU) claim to have developed three machine learning models that could help doctors predict how a COVID-19 patient’s condition might develop. The open-sourced models, all of which require no more than a sequence of X-rays, ostensibly predict patient deterioration up to four days in advance and the amount of supplemental oxygen (if any) a patient…
Read more
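To make the setup concrete, here is a minimal, hypothetical sketch (in PyTorch, not the open-sourced models themselves) of a classifier that takes a short sequence of chest X-rays, encodes each image, pools the per-image features over time, and outputs a deterioration-risk score. The architecture, layer sizes, and the `XraySequenceRisk` name are illustrative assumptions, not details of the Facebook/NYU work.

```python
# Hypothetical sketch (not the released Facebook/NYU models): score
# deterioration risk from a short sequence of chest X-rays by encoding each
# image, pooling the per-image features over time, and applying a linear head.
import torch
import torch.nn as nn

class XraySequenceRisk(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Tiny stand-in image encoder; a real system would use a pretrained CNN.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, 1)  # single risk score per patient

    def forward(self, xrays):               # xrays: (batch, time, 1, H, W)
        b, t = xrays.shape[:2]
        feats = self.encoder(xrays.flatten(0, 1)).view(b, t, -1)
        _, last = self.temporal(feats)      # final hidden state summarizes the sequence
        return torch.sigmoid(self.head(last[-1]))

model = XraySequenceRisk()
risk = model(torch.randn(2, 3, 1, 224, 224))  # two patients, three X-rays each
print(risk.shape)  # torch.Size([2, 1])
```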
AI & Robotics News

Lumiata raises $14 million to predict health care costs and outcomes with AI

Lumiata, a company providing AI-powered predictive analytics for managing health care costs, has raised $14 million. The company says it will use the funds to scale its platform and invest in customer acquisition ahead of opening an office in Guadalajara, Mexico, in 2021. As many as 3.5 million adult hospital stays in 2017 — to the tune of nearly $34 billion — were considered potentially…
Read more
AI & Robotics News

The AI arms race comes to enterprise content management

Thanks to the rise of AI, enterprise content management (ECM) platforms that have historically been employed to manage files are about to evolve into central repositories for tracking relationships among a much wider range of data types. Fresh off raising an…
AI & Robotics News

Salesforce researchers release framework to test NLP model robustness

In the subfield of machine learning known as natural language processing (NLP), robustness testing is the exception rather than the norm. That’s particularly problematic in light of work showing that many NLP models exploit spurious correlations that undermine their performance outside of specific test sets. One report found that 60% to 70% of answers given by NLP models were embedded somewhere in the…
Read more
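As a rough illustration of what a basic robustness check looks like (this is not Salesforce's framework or its API), the sketch below perturbs each input with a small typo and measures how often a toy classifier's prediction stays the same. The `add_typo`, `toy_sentiment`, and `robustness_rate` helpers are hypothetical stand-ins.

```python
# Minimal sketch of one kind of robustness test: perturb each input slightly
# and check whether the model's prediction flips. The classifier here is a
# toy stand-in, not an actual NLP model or the Salesforce framework.
import random

def add_typo(text, rng):
    """Swap two adjacent characters to simulate a keyboard slip."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def toy_sentiment(text):
    """Toy classifier: counts positive vs. negative keywords."""
    pos = sum(w in text.lower() for w in ("great", "good", "love"))
    neg = sum(w in text.lower() for w in ("bad", "awful", "hate"))
    return "positive" if pos >= neg else "negative"

def robustness_rate(model, texts, n_perturbations=5, seed=0):
    """Fraction of examples whose label is stable under small perturbations."""
    rng = random.Random(seed)
    stable = 0
    for text in texts:
        original = model(text)
        if all(model(add_typo(text, rng)) == original
               for _ in range(n_perturbations)):
            stable += 1
    return stable / len(texts)

examples = ["I love this product, it is great", "This was an awful experience"]
print(robustness_rate(toy_sentiment, examples))
```

A keyword-counting classifier like this one leans entirely on surface features, so a single typo can flip its label; that fragility is exactly the kind of behavior robustness testing is meant to surface.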
AI & Robotics News

Google trained a trillion-parameter AI language model

Parameters are the key to machine learning algorithms. They’re the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. For example, OpenAI’s GPT-3 — one of the largest language models ever trained, at 175 billion parameters — can make…
Read more
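As a quick illustration of what "parameters" means in practice, the sketch below counts the learned weights and biases of a tiny two-layer network; GPT-3's 175 billion figure comes from the same kind of accounting applied to a vastly larger architecture. The model here is an arbitrary toy, not any of the systems mentioned above.

```python
# Parameters are the learned weights and biases of a model. Counting them for
# a tiny two-layer network shows how the total grows with layer width.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 1024),  # 512*1024 weights + 1024 biases = 525,312
    nn.ReLU(),
    nn.Linear(1024, 512),  # 1024*512 weights + 512 biases = 524,800
)
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # 1,050,112
```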