News + Trends

Panasonic Educational Partner: The apocalypse is about the size of a football

Dominik Bärlocher
6.9.2017
Translation: machine translated

We will all die. Robots will come and finish us off. The latest threat is called Panasonic Educational Partner: a fascinating prototype that can and wants to do a lot, but is genuinely scary.

The IFA in Berlin is not just about presenting new products that will soon be on sale. Here and there, technology is shown that doesn't exist yet, or won't for a while. One of these prototypes is the Panasonic Educational Partner, which not only sets the technological bar high but also raises the creepy factor to unimagined heights.

But before I invoke the robot apocalypse and loudly proclaim that we are all doomed, an explanation of the Educational Partner. I'll also quickly explain why I hope the robots won't kill us all.

The future in football form

The Panasonic Educational Partner is small, compact and neutral in colour. It is a sphere about the size of a football, made up of three elements.

A sketched floor plan of the Educational Partner

Sensors such as cameras, microphones and environmental sensors are installed in the two side segments. The centre section is made of frosted glass and serves both as the robot's means of locomotion and as a display surface for the face. To make the Educational Partner look a little human, but not too human, it has an animated emoji-like face intended to anthropomorphise the device. Behind it are microphones, LEDs and plenty of computing power.

The Educational Partner is one day supposed to become a companion for children aged three to six. This is where the device gets really interesting, because the grinning football, which speaks in a child's voice with a slight lisp, is to be connected to all kinds of cloud services. These are designed to keep it constantly informed about new play opportunities, medical findings and data, and a whole mountain of other things.

"The Educational Partner is designed to instil strong moral and ethical values in the child," says a voice from off-screen during a presentation. The presentation on the screen gives an example of healthy eating. The Educational Partner tells a story that essentially goes like this: "Eat your carrots and you'll be as strong as Carrot Man."

As a mum or dad, you don't even have to be in the same room as your child to fulfil your parental duties thanks to the Educational Partner. The football can regularly take pictures of your offspring and send them to your smartphone. As the little robot is connected to medical databases, it should also be able to diagnose some things itself and at the very least issue a "Take your child to the doctor" warning if the child's data deviates from a baseline.

Why we are all going to die

The lisping, cute robot is creepy. Despite its childish timbre, its voice sounds artificial. The phrasing of some sentences is off, and the emoji faces don't always match what is being said. It talks about moral and ethical values and makes a winking smiley face. Not exactly trustworthy.

Alarm bells ring in my head: there is a lot of surveillance technology built into this rolling robot head. Not just a camera, but probably also biometric sensors and other environmental data readers. I joke with video producer Stephanie Tresch that Panasonic is probably installing knives that will stab everyone as soon as the apocalypse begins. It's impossible to check the exact hardware setup, as there are no ports or access routes to the inner workings of the ball. At least not at first glance.

"We'll even pay for the robots we buy to kill us in our household," I say. A man next to me starts laughing. Stephanie records it all on camera, but says that the take is rubbish because she has to laugh along with him.

Realistically, however, buying a robot like the Educational Partner comes with a whole host of risks. What if third parties gain access to one of the services with which the creepy robot head communicates? Suddenly you get advice like "Shake your child to make it stop crying", which is actually cruelly wrong, but because the robot trusts the service and the parents trust the clumsy robot that wiggles back and forth so sweetly when it speaks... disaster strikes.

The only mitigation here is that the robot limits its medical diagnostics and only outputs straightforward data such as "Your child's temperature is elevated. Please see a doctor." But you and I both know that it won't stop there, because we humans have a tendency to take such things too far once we get started. And why shouldn't we? The little robot offers too many cool possibilities. The tech geek in me knows that not using them would be a total waste.

We also have to ask ourselves whether we really want to leave our children's education to a little robot. Where should a three-year-old get her first stories of wild adventures? From a box that only talks about Carrot Man in an educationally valuable way, or should there also be human nonsense about space pirates? What will happen to our children when they all grow up with stories about Carrot Man? Will they already be standardised in kindergarten? Will they all talk about Panasonic, grow up with the brand and be emotionally attached to a large corporation for life? I don't have the answers to these questions, and I don't think anyone does yet. But now that a robot head is trying to raise children on a table in front of the world's eyes, I think we need to ask them.

"And the thing could be armed with knives," I add. Stephanie messes up the take again. After six days at IFA, we're definitely too tired for a serious discussion of IT security risks in the case of educational robots.

Why we all won't die

The whole scenario of built-in knives is, of course, nonsense. Nevertheless, such a harmless little ball harbours plenty of danger to life, limb and, to some extent, humanity. Authors have been thinking about this for a long time. I'm one of those who think that the talk of "$author foresaw $thing" is nonsense, because fiction is just that: fiction.

  • A possible scenario involving the imprinting of large corporations was described by author Max Barry in his novel "Jennifer Government".
  • Constant surveillance was addressed, with little emotional impact, by Dave Eggers in "The Circle".
  • Indoctrination and, above all, control is the main theme of George Orwell's "1984".

When it comes to robotics, however, science fiction author Isaac Asimov was a thought leader, if you can call a fiction writer a thought leader. In his short story "Runaround", he wrote down three laws that every robot must obey. These are still more or less the standard in the development of artificial intelligence today.

  1. A robot must not harm a human being or allow harm to be done to a human being through inaction.
  2. A robot must obey commands given to it by a human being - unless such a command would conflict with rule one.
  3. A robot must protect its existence as long as this protection does not conflict with rule one or two.

In 1975, Asimov then relaxed the laws somewhat and added the word "knowingly" to the first rule. This means that a robot may not knowingly injure a human being or knowingly watch a human being being injured. He also added a "zeroth" law in 1985, which prohibits a robot from harming humanity as a collective.
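The key point of Asimov's laws is their strict precedence: each law only binds when no higher-ranked law is violated. As a purely illustrative sketch, the ordering can be modelled as a priority-ordered rule check. All the flag names here are invented for this example; no real robot, least of all the Educational Partner, works this way.

```python
# Illustrative only: Asimov's (fictional) laws as a priority-ordered
# rule check. Each entry pairs a law's name with a predicate that
# reports whether a hypothetical action violates it.

LAWS = [
    ("zeroth law", lambda a: a.get("harms_humanity", False)),
    # 1975 wording: the harm must be done *knowingly*.
    ("first law",  lambda a: a.get("knowingly_harms_human", False)),
    # The second law only binds when no higher law is violated,
    # which the ordering of this list enforces automatically.
    ("second law", lambda a: a.get("disobeys_order", False)),
    ("third law",  lambda a: a.get("endangers_self", False)),
]

def evaluate(action: dict) -> str:
    """Check the laws in priority order; the first violation wins."""
    for name, violated in LAWS:
        if violated(action):
            return f"forbidden by the {name}"
    return "permitted"

# An order to harm a human: refusing it violates the second law,
# but the first law outranks it, so the action is still forbidden.
print(evaluate({"knowingly_harms_human": True, "disobeys_order": True}))
# An ordinary, harmless action passes every check.
print(evaluate({}))
```

The precedence falls out of nothing more than the list order: the loop returns at the first violated law, so a lower law never gets a say once a higher one objects.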

Thus, even if it were armed with knives, the Educational Partner would not be allowed to use those blades to stab you and/or your offspring in the face while you are careless or asleep, until its biodata sensors detect complete exsanguination. But that doesn't stop the little robot from planting ideas in children's heads. After all, if it merely passes on well-meaning messages from Panasonic and/or selected partners, it isn't breaking any laws. The danger doesn't just come from hackers who want to trick you into harming your children, but also from the manufacturer. A few million here or there for a slogan on a topic... who could resist that? And anyway, if it's okay for the Educational Partner to talk about carrots, why shouldn't it be okay for it to say "Nestlé's carrots taste great"? It's almost the same thing, isn't it?

At the end of this train of thought, Stephanie Tresch holds the camera up to the robot head's face and I hold the microphone in the general vicinity of the animated mouth, wondering whether the speaker is really there. The Educational Partner wobbles back and forth, lisping at us, and after the successful take the white plastic ball is covered again with a perspex dome. Team digitec walks away with full memory cards and empty batteries. We have no answers, but a lot of questions and a slightly uneasy feeling, even as we joke about our impending doom.
