
Zillow utilizes explainable AI, data to revolutionize how people sell houses



Zillow has long been a big name for online home seekers. More than 135 million homes have been listed on the platform, and the company has streamlined the real estate transaction process across home loans, title, and buying. It says AI has been at the heart of its success in providing customized search functions, product offerings, and accurate home valuations, with a claimed median error rate of less than 2%.

Zillow’s initial forays into AI in 2005 centered on black-box models built for prediction and accuracy, Stan Humphries, chief analytics officer at Zillow, said at VentureBeat’s virtual Transform 2021 conference on Tuesday. Over the past three or four years, as Zillow started purchasing homes directly from sellers, the company has shifted toward explainable frameworks that add context while still delivering the same levels of accuracy as black-box models. “That’s been a kind of a fun odyssey,” Humphries said, noting that the results needed to be as “understandable and intelligible” to a consumer as if they were talking with a real estate agent. Zillow took inspiration from comparative market analyses (CMAs), the estimated appraisals of a property that realtors provide, to create an algorithm that analyzes three to five similar homes.

“Humans can wrap their heads around [that], and say, ‘Okay, I see that home’s pretty similar, but it’s got an extra bedroom, and now there’s been some adjustment for that extra bedroom’ [compared to] a fully ensembled model approach using a ton of different methodologies,” Humphries said.
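Zillow hasn’t published the internals of that comparables-based algorithm, but the idea Humphries describes, adjusting a handful of similar sales for feature differences and then averaging them, can be sketched roughly in Python. The feature names and per-unit dollar adjustments below are hypothetical, not Zillow’s:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Home:
    bedrooms: int
    bathrooms: int
    sqft: int
    sale_price: Optional[float] = None  # known only for comparable sales

# Hypothetical per-unit dollar adjustments (illustrative values only).
ADJUSTMENTS = {"bedrooms": 15_000, "bathrooms": 10_000, "sqft": 120}

def cma_estimate(subject: Home, comps: List[Home]) -> float:
    """Average the comps' sale prices after adjusting each toward the subject home."""
    adjusted = []
    for comp in comps:
        price = comp.sale_price
        # Shift the comp's price toward what it might sell for if it matched the subject.
        price += (subject.bedrooms - comp.bedrooms) * ADJUSTMENTS["bedrooms"]
        price += (subject.bathrooms - comp.bathrooms) * ADJUSTMENTS["bathrooms"]
        price += (subject.sqft - comp.sqft) * ADJUSTMENTS["sqft"]
        adjusted.append(price)
    return sum(adjusted) / len(adjusted)

subject = Home(bedrooms=4, bathrooms=2, sqft=2100)
comps = [
    Home(3, 2, 2000, sale_price=450_000),
    Home(4, 2, 2200, sale_price=480_000),
    Home(4, 3, 2050, sale_price=470_000),
]
print(f"CMA-style estimate: ${cma_estimate(subject, comps):,.0f}")
```

The appeal of this framing is that every number in the output can be traced back to a specific comparable sale and a specific adjustment, which is what makes it explainable to a seller.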

(Humphries’ talk with VentureBeat’s Sage Lazarro can be seen in the video at the top of this article, minus the live audience Q&A that followed.)

The move to explainable models not only helped consumers understand the value of their home but also let them inspect that value and give it a “gut check” against their own intuition, Humphries said. Now that he has seen it is possible to have “the best of both worlds,” combining the accuracy of black-box models with the intuitiveness of explainable models, Humphries said he wishes Zillow had shifted approaches sooner.

Improving the appraisal model

Zestimate, the AI tool Zillow uses to estimate the market value of homes, has been able to draw on expanded information to improve.

“We think about the gains that we’re going to make as being either data-related, [which includes] getting new data or new features out of that data, or algorithm-related, which is new ways to combine and utilize the features that we have,” Humphries said.

Zillow used only public record data for Zestimate in the past, but the company now incorporates information associated with previous sales of comparable homes. Using natural language processing, Zestimate can pull information from what people wrote and said about a property when interacting with Zillow’s representatives. Another rich source of data has come from computer vision, which lets the company mine the images associated with a home. People naturally read appraisals, look at homes, and make judgments about which house seems nicer; Zillow had to teach computers to do that same type of work, Humphries said.
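Humphries doesn’t detail how those signals are wired into the model, but one way to picture them is as extra columns sitting alongside public-record features. The sketch below uses invented feature names, toy data, and a generic gradient-boosted regressor from scikit-learn rather than anything Zillow has described:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical sketch: combine public-record features with signals derived from
# NLP (e.g., a condition score pulled from what sellers wrote or said) and
# computer vision (e.g., a curb-appeal score from listing photos).
# All feature names, scores, and prices below are invented for illustration.

def build_features(record, text_score, image_score):
    # record: (bedrooms, bathrooms, sqft, lot_sqft, year_built)
    return list(record) + [text_score, image_score]

# Toy training set: five homes with known sale prices.
X = np.array([
    build_features((3, 2, 1800, 5000, 1978), 0.6, 0.5),
    build_features((4, 3, 2400, 6500, 1995), 0.8, 0.7),
    build_features((2, 1, 1100, 3000, 1952), 0.3, 0.2),
    build_features((5, 4, 3200, 9000, 2010), 0.9, 0.9),
    build_features((3, 2, 2000, 5500, 1988), 0.5, 0.6),
])
y = np.array([410_000, 560_000, 280_000, 780_000, 450_000])

model = GradientBoostingRegressor(random_state=0).fit(X, y)
new_home = np.array([build_features((4, 2, 2100, 6000, 2001), 0.7, 0.8)])
print(f"Estimated value: ${model.predict(new_home)[0]:,.0f}")
```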

In February 2006, Zestimate required 35,000 statistical models to estimate the market value of 2.4 million homes. Now the tool generates 7 million machine learning models to estimate the value of 110 million homes nationwide.

“There’s been a lot of algorithmic advances in what we’re doing. But behind the scenes, there’s also been a huge amount of additional data that we take in now that we just didn’t back then,” Humphries said.

Zillow recently announced the release of Zestimate version 6.7. The update introduces a new framework that incorporates neural networks into the ensemble approach, making the algorithm more accurate and decreasing Zillow’s median absolute percentage error from 7.6% to 6.9%.
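Zillow hasn’t published the architecture behind the update, but a minimal sketch of what “neural networks within an ensemble” can mean, here a small multilayer perceptron whose predictions are simply averaged with a gradient-boosted tree model on synthetic data, might look like this:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.ensemble import GradientBoostingRegressor, VotingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data and hyperparameters, not Zillow's: a noisy price signal driven by square footage.
rng = np.random.default_rng(0)
X = rng.uniform(1000, 4000, size=(200, 1))            # square footage
y = X[:, 0] * 200 + rng.normal(0, 20_000, size=200)   # noisy sale price

# Scale inputs and targets so the neural net trains on well-conditioned values.
mlp = TransformedTargetRegressor(
    regressor=make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    ),
    transformer=StandardScaler(),
)
gbr = GradientBoostingRegressor(random_state=0)

# Average the two models' predictions to form the ensemble estimate.
ensemble = VotingRegressor([("neural_net", mlp), ("boosted_trees", gbr)])
ensemble.fit(X, y)
print(f"Ensemble estimate for a 2,500 sqft home: ${ensemble.predict([[2500]])[0]:,.0f}")
```

Production ensembles typically combine far more models with learned weights, but the averaging step above captures the basic shape of the approach.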

Zillow’s AI journey

The company’s technological innovation has to strike a balance between consumer interest and technical feasibility. The team thinks about consumers’ pain points and the products that could help solve them, but it also has to consider what it can actually build. In the case of Zestimate, customers were asking for the context behind appraisals. The representatives using the tool didn’t ask for insights generated by natural language processing and computer vision because they didn’t know those were even possible, Humphries said.

The company is currently working on having users close their deals with human agents, with the goal of eventually making this evaluation completely machine-generated.

“The customer is kind of our North Star,” Humphries said.



Author: Allison Huang
Source: VentureBeat
