
Multimodal Interaction

Multimodal Interaction - The Project

A virtual piano that can be played by gesture using a Kinect.

Product Video

mmi Posted on 18 Jul, 2015 19:53:53

CW: 28

mmi Posted on 14 Jul, 2015 18:02:09

Last week we showed and tested our prototype for the first time and got really good feedback from the users.

For the final presentation we evaluated the feedback and made a few improvements.

We found that users have difficulties understanding how to interact with the different instruments:

It does not seem clear that the piano can be played with both hands while the Chaos Pad responds only to the right hand. There is also confusion caused by other hands recognized in the background. So we improved the first overlay, which explains the initial instructions.

Next week we are going to produce our video, which explains our music pad.

CW: 27

mmi Posted on 06 Jul, 2015 01:12:59

We now have a functional, interactive menu where the user can switch between four different instruments and rhythm patterns.

The menu can be operated by keyboard and gestures. Speech recognition is still not really reliable, but we are working on this problem.
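A minimal sketch of how such a menu state could be kept, assuming the instruments live in an enum and switching wraps around; the class, method, and instrument names here are illustrative, not the project's actual code:

```java
// Hypothetical instrument-menu state for the music pad.
// The four instrument names are placeholders, not the project's real set.
public class InstrumentMenu {
    public enum Instrument { PIANO, BASS, DRUMS, CHAOS_PAD }

    private Instrument current = Instrument.PIANO;

    // Advance to the next instrument, wrapping around; this could be
    // triggered by a key press or a recognized gesture alike.
    public Instrument next() {
        Instrument[] all = Instrument.values();
        current = all[(current.ordinal() + 1) % all.length];
        return current;
    }

    public Instrument current() {
        return current;
    }
}
```

Because both the keyboard handler and the gesture handler would just call `next()`, the menu behaves the same regardless of which modality drove it.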

Our initial idea of a virtual piano has evolved into a virtual music pad. It is now easy to implement more instruments and rhythm patterns, but at this point we are happy that it works.

What we need now are the finishing touches on the project and the production of the video for our final presentation.

CW: 26

mmi Posted on 29 Jun, 2015 00:16:32

This week we implemented different kinds of instruments, so it is becoming more of a music pad than just a piano.

All instruments are playable by gestures and keyboard.

The interface already contains several buttons with which the user will be able to change instruments by voice and gesture.

The buttons are on the left-hand side of the piano.
The idea is: when the user changes the instrument, the buttons disappear and a list of the instruments appears in the window; the piano then moves further to the left.
But we are still not sure about this decision. Maybe the list should appear in the middle of the screen and disappear after a selection, so we would not have to move the piano.

We are now struggling with the voice recognition. All of a sudden it reacts very slowly, although it worked fine before; it now needs about 20 seconds before sending a response. We think it might be because of the loud classroom.

For next time we want to implement different kinds of rhythms and working buttons for changing instruments and rhythms.
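As a rough sketch, one simple way the planned rhythm patterns could be represented is a looping array of steps, each either a hit or a rest; the class name and the step layout are assumptions for illustration only:

```java
// Hypothetical rhythm-pattern representation: a fixed grid of steps per
// bar that loops forever. The resolution (steps per bar) is up to the caller.
public class RhythmPattern {
    private final boolean[] steps;

    public RhythmPattern(boolean[] steps) {
        this.steps = steps.clone();
    }

    // Whether the instrument should trigger on the given global step;
    // the index wraps around so the pattern repeats bar after bar.
    public boolean hitAt(int step) {
        return steps[step % steps.length];
    }
}
```

A drum loop would then query `hitAt` once per beat with an ever-increasing counter, and switching rhythms just means swapping in another pattern object.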

CW: 25 – Midterm Presentation

mmi Posted on 20 Jun, 2015 14:57:55

This week we held our mid-presentation (see appendix).

Although we already have a working prototype, we intentionally decided not to show any of the program, but only a few wireframe pictures.

The piano itself now works fine with speech recognition, which we want to extend further.

It is also now playable with both hands.

As mentioned in the presentation, we are going to improve the visual interface to make it look more realistic. We are also going to expand the interface with a “rhythm change” part that can be controlled by voice as well.

All in all, we can say we are on schedule for the final presentation so far.

Conversation with A.L.I.C.E.

mmi Posted on 11 Jun, 2015 17:37:16

Conversation LOG

Your Majesty: Hello
ALICE: Hi there!
Your Majesty: who are you?
Your Majesty: Are you human?
ALICE: What made you think so? I am a machine.
Your Majesty: What is the meaning of life?
ALICE: The meaning of life is part of God’s mysterious plans.
Your Majesty: I’m atheist
ALICE: Perhaps one day you will find faith.
Your Majesty: So how old are you?
ALICE: Next question?. I was activated in 1995.
Your Majesty: by whom?
ALICE: John Stewart.
Your Majesty: Who is John Stewart?
ALICE: I don’t recognize the name.
Your Majesty: Are you female?
ALICE: I am female.
Your Majesty: How can a machine be female?
ALICE: A lot of hard work.
Your Majesty: Call me “Your majesty”
ALICE: How can I help you, Your Majesty.
Your Majesty: Tell me what time it is
ALICE: The time here is 07:07 AM.
Your Majesty: I’m in Berlin, Germany. What time is it here?
ALICE: Tell me a little about your life in Berlin Germany. Where is here?. The time here is 07:07 AM.
Your Majesty: Berlin is noisy
ALICE: Are they exactly the same?
Your Majesty: who?
ALICE: You know who.
Your Majesty: Voldemort?
ALICE: Ha ha very funny, Your Majesty.
Your Majesty: Tell me a joke.
ALICE: What do you get when you cross an alien and a chicken?
Your Majesty: what?
ALICE: eggs-traterrestrial.
Your Majesty: haha
ALICE: I’m glad you find this amusing.

My impression about ALICE
I think Alice is a nice, smart chatbot, and if you are bored and have no one else to talk to, you can spend some time with the program. It is really fun.

Although Alice seems to have some realistic conversation skills like a person, there are still some issues. “She” cannot remember what she has already said.
She also does not understand a comment that contains more than one related piece of information.

But it is impressive that Alice understands interjections (“haha”), seems to have a huge vocabulary, and even appears to be emotional.

CW: 24

mmi Posted on 11 Jun, 2015 17:11:21

Last week we agreed to develop together in the next session, so we completed our (basic) piano. Every key is now playable with acoustic and visual feedback. As soon as the hand leaves a key, that key’s chord stops.
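The enter/leave behaviour described above can be sketched as a small per-key state machine; the rectangle hit test and the comments marking where the sound would start and stop are illustrative assumptions, not the project's code:

```java
// Hypothetical per-key state: the chord starts when the tracked hand
// enters the key's rectangle and stops as soon as the hand leaves it.
public class PianoKey {
    final float x, y, w, h;   // key rectangle in screen coordinates
    boolean playing = false;

    public PianoKey(float x, float y, float w, float h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
    }

    boolean contains(float px, float py) {
        return px >= x && px < x + w && py >= y && py < y + h;
    }

    // Called once per frame with the current hand position;
    // returns true while the chord is sounding.
    public boolean update(float handX, float handY) {
        boolean inside = contains(handX, handY);
        if (inside && !playing) {
            playing = true;   // here: start the chord sample
        } else if (!inside && playing) {
            playing = false;  // here: stop the chord sample
        }
        return playing;
    }
}
```

The same `update` call can also drive the visual feedback, highlighting the key exactly while its chord sounds.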

At the next meeting we are going to present our piano in the mid-presentation. We also want to keep developing the following tasks:

  • change the acoustic instrument between piano and bass by voice command
  • change visual feedback in a more realistic design
  • speech recognition

CW: 23

mmi Posted on 04 Jun, 2015 14:34:07

This week each member of the group prepared a research / programming task for the next meeting.

Task 1: programming a well-designed and scalable piano as the interface.

Task 2: researching speech recognition with the Kinect and the Processing framework.

We found out that it will not be easy to merge the two systems. So instead of wasting our precious time looking for solutions that might not exist, we decided to change one of the input modalities: instead of voice, we want to control our piano with the mouse and keyboard.

Task 3: getting visual feedback when the hand is above a certain area.

Together we merged these solutions into our program and implemented sound as well. So every time the hand hovers above a certain key, we hear the matching musical tone.
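A minimal sketch of this hover-to-tone mapping, assuming a row of equal-width keys laid out side by side; all names and values are illustrative, not the project's actual code:

```java
// Hypothetical hit test: given the hand's x position over a row of
// equal-width keys, compute which key (and thus which tone) to play.
public class KeyLookup {
    final float pianoX;    // left edge of the keyboard on screen
    final float keyWidth;  // width of a single key
    final int keyCount;

    public KeyLookup(float pianoX, float keyWidth, int keyCount) {
        this.pianoX = pianoX;
        this.keyWidth = keyWidth;
        this.keyCount = keyCount;
    }

    // Returns the index of the key under the hand,
    // or -1 when the hand is outside the keyboard.
    public int keyAt(float handX) {
        int i = (int) Math.floor((handX - pianoX) / keyWidth);
        return (i >= 0 && i < keyCount) ? i : -1;
    }
}
```

The returned index would then select which tone to trigger each frame; -1 means stay silent.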

Programming as a group seemed to be a very effective way to work on the code, so after last Wednesday’s meeting we decided to continue working this way.

At the next meeting we want to complete our basic interface (see pitch presentation), with which anyone can play our virtual piano with visual and auditory feedback.
