How Stanford University is putting humans at the centre of AI


As AI has an ever-greater impact on our day-to-day lives, we're understandably seeing growing interest in its effect on society.

Many people have raised concerns about the potential problems AI could cause, and we’ve seen examples first-hand – just look at the recent A-level results fiasco. We’ve also heard prophecies that AI will replace human jobs, but on the flip side, we’re also seeing increased research into how the technology can be used to address real-world problems.


At Stanford University in California, a research centre has been set up specifically to advance AI research, education and policy for the benefit of humanity: the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

“The institute was set up to look at three areas: the human impact of AI, how AI can augment rather than replace people, and improving the ‘intelligence’ part of AI,” says James Landay, an associate director at the HAI and professor of computer science at Stanford University.

The HAI supports the development of new research into these areas, in particular interdisciplinary projects that involve academics from across Stanford’s engineering, social science and medical schools. It also works hard on improving education, both internally and externally, helping people to understand the technology and how it can impact the world.

Educating lawmakers and business leaders

“We have courses for students, but we also provide education for folks externally who need to know about this technology, such as staff in US Congress and other lawmakers,” says Landay. “It’s important that they know how it works if they’re going to start legislating, let’s say, about privacy in respect to machine learning. Therefore, we put on educational opportunities for staff from Capitol Hill, and also those in industry who want to better understand these trends.”

The HAI also does a lot of work around policy, collaborating closely with experts from Stanford University’s economics and business schools to ensure that any potential policies shape AI’s impact for the good of society. “We convene conferences that bring people with different perspectives together to really hammer out what the key issues are,” Landay says.

He notes that AI is very much at the positive end of the hype cycle right now, but companies at the leading edge are coming to understand that there are social impact and ethical issues around AI. “For example, we’ve seen some of the big companies pull out of facial recognition technology right now, because they understand that we don’t yet know how to make sure it doesn’t discriminate.

“Our hope is that we can help shift the conversation, to make sure [more] companies are paying attention to AI’s impact on people as well as their bottom line.”

Harnessing AI to benefit humanity

To help build on its successes, the HAI this year launched a new grant programme to support research that “harnesses AI to build a better future for humanity”. In the past, the institute had offered a small programme of seed grants to postgraduate students with interesting ideas, but this year, with the introduction of the Hoffman-Yee Research Grant Program, things are really ramping up.

“We’re able to give grants to teams of interdisciplinary researchers with bold ideas that still need to be proven. If we can get these bold ideas working, then they’ll be able to find more traditional funding,” Landay explains.

The HAI received submissions from over 20 departments across all of Stanford’s schools, and six teams were chosen to receive the first round of funding. This will help them hire the researchers and purchase the equipment they need. In addition, the HAI is able to offer cloud computing credits to the research teams, thanks to the support of corporate partners.

“We’re able to say here’s your grant, but also $50,000 worth of Amazon or Microsoft cloud credits,” says Landay.

The projects cover everything from engineering to the humanities, with goals ranging from using AI to facilitate and improve student learning, elderly care and government operations, through to developing next-generation, language-based virtual agents capable of collaborating with humans better than ever before.


“For example, one project brings together professors of computer science, education and psychology to look at how we can make AI tutors that really help prepare students for the 21st century workforce,” Landay says. “This might be teens or people looking to retrain. We’re not looking to replace teachers but augment them in some way, so the learner can get more information, but also give diagnostics to the teacher so they can better understand where there are problems.”

Controversial topics

At the other end of the scale is a humanities project led by the chair of linguistics looking at how concepts and terms change over time. “A concept we have now meant something totally different a hundred or two hundred years ago,” says Landay. “They want to understand how those concepts changed over time, so are using natural language processing (NLP) to look at all of these old books and develop a tool in the humanities to better understand that.

“It’s a novel and exciting idea, one that’s a little controversial within the humanities community. These are the kinds of things we want to fund though, where we’re taking a risk. It might fail, but if it works it could be a big thing.”
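To give a flavour of the kind of analysis the linguistics team describes, the sketch below is a deliberately simplified, hypothetical illustration (not the Stanford project's actual tooling, which would use far larger corpora and trained embeddings). It compares the words that appear around a target term in two toy corpora from different eras; a low cosine similarity between the two context profiles suggests the term's usage has shifted.

```python
from collections import Counter
from math import sqrt

def context_vector(corpus, target, window=2):
    """Count words co-occurring with `target` within +/- `window` tokens."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy, invented corpora: "broadcast" as sowing seed vs transmitting media
old_texts = ["the farmer chose to broadcast seed across the field",
             "broadcast the grain evenly over the soil"]
new_texts = ["the station will broadcast the news tonight",
             "they broadcast the match live on television"]

old_vec = context_vector(old_texts, "broadcast")
new_vec = context_vector(new_texts, "broadcast")
# A low score indicates the word now keeps very different company
print(f"similarity: {cosine(old_vec, new_vec):.2f}")
```

Real diachronic studies replace these count vectors with word embeddings trained on century-scale book corpora, but the underlying idea is the same: track how a word's context changes between time slices.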

Now the clock is ticking – each team has 12 months to show results, which could be anything from developing a more concrete idea through to having made some early steps. From there the HAI will choose three projects to receive substantial additional funding for a further two years.

When will we see results?

As ever with this type of research, the question quickly arises of when any of it is likely to trickle down into wider society. This, Landay says, all depends on the type of problem the project has chosen to address. “One team is working on robotic devices that can detect when an elderly person is about to fall and stop them. It sounds like a great idea if you can make it work. However, making that in a way that’s a practical device any of us would want to wear feels pretty far out. It might take 10 years to get there. But then I’ve got another team looking at how AI-driven, evidence-based learning could drive efficiencies and improve services within US government agencies. That kind of work could lead to software that could be implemented in a shorter period of time.”


Although we might not see results for some time yet, the grant scheme is already helping support the HAI’s key goals. The project announcements have pushed these ideas out to a much broader audience, allowing people to see what AI could potentially offer society. It’s an exciting area to watch, and if even just one of these projects comes to fruition, it will definitely make a positive impact on society.

Keri Allan

Keri Allan is a freelancer with 20 years of experience writing about technology and has written for publications including the Guardian, the Sunday Times, CIO, E&T and Arabian Computer News. She specialises in areas including the cloud, IoT, AI, machine learning and digital transformation.