Google Lamda: the next big thing in chat tech

Google has announced a UK version of an app that lets users interact with the artificial-intelligence system one of its engineers has claimed is sentient.

It is a very limited trial, with only three scenarios to choose from.

And while Google wants feedback about how its Language Model for Dialogue Applications (Lamda) performs, the app’s users cannot teach it any new tricks.

The company has always maintained that the technology behind the chatbot is not sentient.

People can join the AI Test Kitchen waiting list via their Google account on either an Android or Apple device. Once they’re approved, they’ll have immediate access to the app.

When the app launched in the US, it took only three or four days for thousands of people to sign up.

Marshmallow

I had wanted to meet Lamda ever since the claim that it might be conscious was made. The claim captured everyone’s imagination, even though the system was very clearly doing what it had been programmed to do.

That’s why I’ve been (mildly) persistent with Google – and when the call came, it was laden with caveats:

I wasn’t allowed to record anything or quote anyone.

I had a long wait for the demo.

The demo itself was in the hands of a member of Google staff.

Still, the app was released on time, and I couldn’t be more grateful.

The three scenarios to choose from were:

Imagine it – ask Lamda to conjure up and describe an imaginary place or situation.

List it – Lamda breaks down a goal into a list of subtasks.

Dogs – everything you need to know.

First, I asked Lamda to imagine it was on a marshmallow planet.

And the response was far more sophisticated than anything I had seen from the chatbots of everyday life – on shopping sites and banking apps, for example.


When I asked it what the “sticky pink” stuff in the sky was, it suggested candy floss.

Users are asked to rate each answer on a scale of one to five stars, and can flag responses that are offensive or about topics they didn’t ask for.

And a prominent disclaimer across the top of the screen warns that some content may be inappropriate.

Unlike voice assistants such as Google Assistant and Alexa, Lamda will not learn from its interactions with the app’s users.

I then asked the app how to create a vegetable garden, and received detailed advice on what size it should be, what soil to use, and how to apply fertilizer.

There was a tense moment when I tried to redirect the conversation.

“I’m sorry, I can’t seem to think of anything to say. Anyway…” it replied, before returning to its prescribed theme.


Lamda could allow programmers to bring their ideas to reality.

It was a very cautious peek at something that felt like it could be a powerful tool but that Google doesn’t seem to want to take seriously just yet.

I don’t know whether I’ll ever be allowed to interact with Lamda myself – but I will keep asking.

If anything gives the bot away, it’s that it has a way of staying conversational and humble – more like talking to Stephen Fry than your next-door neighbour.

In the meantime, I asked my partner to imagine that he was on a marshmallow planet, to see what would happen.

“It’s going to be hard work,” he replied, warily.
