Tuesday, April 26, 2011

Paper Reading #25 - Using language complexity to measure cognitive load for adaptive interaction design

Reference:
M. Asif Khawaja, Fang Chen, and Nadine Marcus.
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces

Summary:
This paper is about an adaptive interaction system.  The system keeps track of the user's current cognitive load and can change its responses, presentation, and interaction flow to improve the user's experience and performance.  The authors propose a speech content analysis approach for measuring the user's cognitive load: the system analyzes the user's language and dialogue complexity.
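
The paper's exact linguistic features aren't listed in this summary, so here is only a rough Python sketch of the general idea: score a speech transcript with simple complexity measures.  The features (average sentence length, average word length) and the weights are my stand-ins, not the authors' actual measures.

# Hypothetical sketch: estimate "language complexity" from a speech transcript.
def language_complexity(transcript: str) -> float:
    # Split into rough sentences and words.
    sentences = [s for s in transcript.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    words = transcript.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)           # words per sentence
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)   # characters per word
    # Arbitrary weighting of the two features into one load score.
    return 0.7 * avg_sentence_len + 0.3 * avg_word_len

print(language_complexity("Send the truck. Uh, the one that was previously dispatched to the northern sector."))
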
Discussion:
I think that this system could be useful.  Unfortunately, this paper focuses on the analysis of the user's load rather than on what the system would do to help the user's performance.  I think that would be more interesting.

 

Paper Reading #24 - Mobia Modeler: easing the creation process of mobile applications for non-technical users

Reference:
Florence Balagtas-Fernandez, Max Tafelmayer, and Heinrich Hussmann.
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces

Summary:
This paper is about a tool that makes it easy for people without programming skills to build mobile applications.  While some mobile companies are opening up their APIs and tools, those without programming skills are still unable to create anything with them.  The authors present a tool to address this, and they use the creation of an application in the area of mobile health monitoring as a proof of concept.



Discussion:
I think two things about this paper.  First, as a programmer, I think that it is horrible.  Being able to program is an acquired skill, so why should people be allowed to take shortcuts?  It is like developing something that could play the piano just by being told what to do; it takes the technical ability out of doing something.  Secondly, however, I think that it could be interesting to see what people with no programming skills can do with a tool like this.

 

Paper Reading #23 - Evaluating the design of inclusive interfaces by simulation

Reference:
Pradipta Biswas and Peter Robinson.
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces

Summary:
This paper is about a simulator that helps in the design and testing of assistive interfaces.  The system can predict interaction patterns with a variety of input devices.  The authors present a study done to evaluate the simulator, in which they considered a representative application being used by able-bodied, visually impaired, and mobility-impaired people.  The simulator predicted task completion times with high accuracy.
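
The summary doesn't say which internal models the simulator uses, so as an illustration here is a Python sketch of one standard way such simulators predict pointing time, Fitts's law, with invented constants; this is the general technique, not necessarily the paper's.

import math

def fitts_time_ms(distance: float, width: float, a: float = 50.0, b: float = 150.0) -> float:
    # a and b are empirically fitted constants (invented values here);
    # an impaired-user profile could simply use different constants.
    return a + b * math.log2(distance / width + 1)

print(fitts_time_ms(distance=400, width=40))           # baseline user profile
print(fitts_time_ms(distance=400, width=40, b=400.0))  # slower, motor-impaired profile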


Discussion:
I think that this is an interesting system.  I have already read a paper about a system to help with assistive interfaces, but I was intrigued that this one focuses on evaluating the interfaces rather than actually providing assistance.  This system would be helpful when evaluating other systems.

Paper Reading #22 - From documents to tasks: deriving user tasks from document usage patterns

Reference:
Oliver Brdiczka.
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces

Summary:
This paper is about a new system to assist users in task switching.  Most workers switch between multiple tasks in a day, and the switches require recovery time in between to get reacquainted with the new task.  Since these switches happen frequently in a typical work day, task management systems were developed to aid workers.  Typical systems, unfortunately, require a lot of investment on the user's side, either in learning the system or in training it.  The new system proposed in the paper automates the estimation of users' tasks from document interaction.  Instead of looking at the content of the documents, which could violate the user's privacy, the system monitors desktop activities and stores an identifier for each document on the user's desktop.
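
The paper's logging format isn't given here, but a minimal Python sketch of the privacy-preserving idea is: replace each document with an opaque one-way hash so only the switching pattern is recorded.  All names below are my own.

import hashlib
import time

def document_id(path: str) -> str:
    # A one-way hash stands in for the document: the log shows switching
    # patterns without exposing file names or contents.
    return hashlib.sha256(path.encode()).hexdigest()[:12]

activity_log = []

def on_window_focus(path: str) -> None:
    activity_log.append((time.time(), document_id(path)))

on_window_focus("C:/reports/q3_budget.xlsx")
on_window_focus("C:/reports/q3_summary.docx")
print(activity_log)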


Discussion:
I think that this system could be useful.  I did like the fact that the system focuses on document switches rather than on document contents.  This means workers with classified information would still be able to use it.  I am interested to see how this system gets developed further.




Paper Reading #21 - iSlideShow: a content-aware slideshow system

Reference:
Jiajian Chen, Jun Xiao and Yuli Gao.
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces
 
 
Summary:
This paper is about a photo slideshow system.  The system can automatically analyze thematic information about the collection of photos.  The system can then generate slides for two modes: story-telling, and person-highlighting.  In the story-telling mode, the system clusters photos by a theme-based clustering algorithm, and tiles multiple photos on a slide.  There are many tiling layouts, and the slideshow is animated by transitions.  In the person-highlighting mode, the system begins by recognizing faces from photos.  Then it creates photo clusters for each individual. 
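
The paper's theme-based clustering algorithm isn't spelled out in this summary, so here is only a toy Python sketch of the idea, grouping photos into slides by shooting time (a deliberate simplification of mine; the gap threshold is invented):

# Hypothetical sketch: cluster photos into slideshow "themes" by timestamp.
photos = [("beach1.jpg", 10), ("beach2.jpg", 12), ("party1.jpg", 95), ("party2.jpg", 97)]

def cluster_by_time(photos, gap=30):
    photos = sorted(photos, key=lambda p: p[1])
    clusters, current = [], [photos[0]]
    for photo in photos[1:]:
        if photo[1] - current[-1][1] <= gap:  # close in time -> same event/theme
            current.append(photo)
        else:
            clusters.append(current)
            current = [photo]
    clusters.append(current)
    return clusters

for slide in cluster_by_time(photos):
    print([name for name, _ in slide])  # each cluster becomes one tiled slide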
 
Discussion:
I think that this system is pretty cool.  I like that it can sort by theme or by person.  This seems similar in concept to another paper I read, the touch-interface scrapbook.  They both take something that is usually done by hand and automate it.  I think this one would be more popular than the other, though.


 

Paper Reading #20 - Raconteur: from intent to stories

Reference:
Pei-Yu Chi and Henry Lieberman.
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces



Summary:
This paper is about Raconteur, a system designed to help users create stories from annotated media elements.  It uses the AnalogySpace commonsense reasoning technique and is designed to help users understand how a story fits together.  The focus is on pictures and videos, with the aim of helping novice editors.

Discussion:
I'm not sure what to think about this system.  On one hand, it will help users understand a story and how each piece fits together.  On the other hand, it is intended to be used by novice editors, and I don't know if that is good or not.  This new system may take jobs away from good editors, because any novice can use this software.

Paper Reading #19 - Social signal processing: detecting small group interaction in leisure activity

Reference:
Eyal Dim and Tsvi Kuflik.
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces

Summary:
This paper is about social signal processing for small groups.  The social interactions are monitored through signals like how close people stand to one another when starting a conversation and their voice communication.  If a system can understand the social interactions of a group, it can intervene and suggest relevant information at the right time.  The authors conducted a study to determine the feasibility of automatically detecting group interaction in a museum.  The study was done at the Tel Aviv Museum of Art with 58 small groups.
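
The study's actual detection method isn't described here beyond the signals it uses, so this is only a hypothetical rule-based sketch in Python; the thresholds are invented:

def is_interacting(distances_m, voice_active):
    # Group members within conversational distance plus someone speaking
    # is taken as evidence of group interaction.
    close = all(d < 1.5 for d in distances_m)
    return close and voice_active

# Pairwise distances (meters) between three visitors, plus a voice-activity flag.
print(is_interacting([0.8, 1.1, 0.9], voice_active=True))   # True: likely interacting
print(is_interacting([3.0, 2.5, 2.8], voice_active=True))   # False: spread out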

Discussion:
I think that this was interesting.  The research could have an impact on future products.  At first I thought that the interactions would be difficult to map because everyone acts differently with different people, but I realized that there are still similarities in how people act.  I think that the research should be extended to settings other than a museum to gather more information, though.


Paper Reading #18 - Personalized user interfaces for product configuration

Reference:
Alexander Felfernig, Monika Mandl, Juha Tiihonen, Monika Schubert, and Gerhard Leitner.
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces



Summary:
This paper is about configuration techniques that provide personalized default values for users.  Since many products today are widely distributed, the default settings are general rather than personalized, and they can be difficult for the user to understand and get accustomed to.  The authors conducted an empirical study and found an improvement in user satisfaction and in the quality of the configuration process.
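
The paper's actual techniques can't be reconstructed from this summary; as a loose illustration, here is one plausible approach in Python, reusing defaults from the most similar past user.  This nearest-neighbour stand-in is my own, not necessarily what the authors did.

past_users = [
    {"age": 25, "defaults": {"theme": "dark", "font_size": 11}},
    {"age": 60, "defaults": {"theme": "light", "font_size": 16}},
]

def personalized_defaults(new_user):
    # Pick the past user most similar to the new one and reuse their settings.
    nearest = min(past_users, key=lambda u: abs(u["age"] - new_user["age"]))
    return nearest["defaults"]

print(personalized_defaults({"age": 58}))  # -> {'theme': 'light', 'font_size': 16}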


Discussion:
While the paper was interesting, it was difficult to understand at times.  There were many mathematical formulas and technical terms, and there were no images to help explain the ideas.  The paper would definitely need to be changed for average readers to understand it.





Monday, April 25, 2011

Paper Reading #17 - A natural language interface of thorough coverage by concordance with knowledge bases

Reference:
Yong-Jin Han, Tae-Gil Noh, Seong-Bae Park, Se Young Park, Sang-Jo Lee
IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces



Summary:
One of the critical problems in natural language interfaces is the discordance between the expressions covered by the interface and those covered by the knowledge base.  In a graph-based knowledge base such as an ontology, all possible queries can be prepared in advance.  As a solution to the discordance problem in natural language interfaces, this paper proposes a method that translates a natural language query into a formal language query such as SPARQL.  A user query is translated into a formal language by choosing the most appropriate query from the prepared queries.  The experimental results show high accuracy and coverage for the given knowledge base.
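
The paper's matching method isn't detailed in this summary, so here is only a toy Python sketch of the selection step: score prepared SPARQL templates by word overlap with the user's question.  The templates, predicates, and scoring are all my own.

prepared = {
    "who directed ?film": "SELECT ?d WHERE { ?film ex:director ?d }",
    "when was ?film released": "SELECT ?y WHERE { ?film ex:releaseYear ?y }",
}

def best_query(question: str) -> str:
    words = set(question.lower().split())
    def overlap(template: str) -> int:
        return len(words & set(template.split()))
    # Choose the prepared query whose template shares the most words.
    return prepared[max(prepared, key=overlap)]

print(best_query("who directed Alien"))  # -> the ex:director SPARQL query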
 
Discussion:
There has always been difficulty in translating from a natural language into a formal language; there are often too many subtle nuances that get overlooked.  However, this system seems like it could be useful, and I thought that it sounded interesting.
 
 

Paper Reading #16 - Mixture model based label association techniques for web accessibility

Reference:
Muhammad Asiful Islam, Yevgen Borodin, I. V. Ramakrishnan

UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology


Summary:
For most people, reading a web page is not a problem.  However, what they don't realize is that they intuit information from the whole page to understand it.  Blind users, unfortunately, are forced to use a screen reader to aid them, and screen readers are limited in what they can do.  If there is an error, like a typo, the reader will not realize it.  This may seem trivial, but when filling out a form for online shopping or bill paying, it can cause problems.  This paper is about a Finite Mixture Model (FMM), a system that takes a form element and calculates its most likely label.  In addition, a user study with two blind users is included in the paper.
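
The actual mixture model can't be reconstructed from this summary, so here is a toy Python sketch of the underlying idea only: score candidate labels for a form field by mixing simple cues.  The cues (proximity, row alignment) and weights are stand-ins for the paper's FMM.

def label_score(field, label):
    dx = abs(field["x"] - label["x"])
    dy = abs(field["y"] - label["y"])
    proximity = 1.0 / (1.0 + dx + dy)                    # closer text is more likely
    aligned = 1.0 if field["y"] == label["y"] else 0.3   # same row is a strong cue
    return 0.6 * proximity + 0.4 * aligned               # invented mixture weights

field = {"x": 200, "y": 50}
labels = [{"text": "Name", "x": 120, "y": 50}, {"text": "Submit", "x": 200, "y": 300}]
print(max(labels, key=lambda l: label_score(field, l))["text"])  # -> "Name"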


Discussion:
I think that this is a good example of how technology can be used to aid people.  It would be very helpful to blind users.  Unfortunately, the paper got technical and was difficult to understand at times.

Paper Reading #15 - TurKit: human computation algorithms on mechanical turk

References:
Greg Little, Lydia B. Chilton, Max Goldman, Robert C. Miller
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology


Summary:
This paper is about software called TurKit.  Mechanical Turk provides an on-demand source of human computation, which creates a tremendous opportunity to explore algorithms that incorporate human computation as a function call.  TurKit is a toolkit that provides a way of exploring human computation while maintaining an imperative programming style.  The authors provide example applications of human computation algorithms and case studies where TurKit is used in real experiments.
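
TurKit itself is JavaScript running over Mechanical Turk, so here is only a rough Python analogy of its key idea: memoize each human task's result so an imperative script can be re-run without re-posting finished work.  human_ask() is my stand-in for posting a real task.

memo = {}  # TurKit persists a cache like this between runs ("crash-and-rerun")

def human_ask(question: str) -> str:
    if question not in memo:
        # A real system would post a HIT here and block until a worker answers;
        # input() stands in for the human worker.
        memo[question] = input(question + " ")
    return memo[question]

# An imperative algorithm mixing human and machine steps:
caption = human_ask("Describe this photo in one sentence:")
improved = human_ask("Improve this caption: " + caption)
print(improved)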

Discussion:
Many companies use Mechanical Turk for their customer reviews today, relying on human computation rather than computer algorithms.  With this new system, however, there may be a way to change the current systems to utilize TurKit.

Paper Reading # 14 - A framework for robust and flexible handling of inputs with uncertainty

References:
Julia Schwarz, Scott E. Hudson, Jennifer Mankoff, Andrew D. Wilson
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology
 
Summary:
New input technologies like touch, recognition-based input such as pen gestures, and next-generation interactions all provide for more natural user interfaces.  However, these techniques all create inputs with some uncertainty.  Conventional infrastructure lacks a method for easily handling uncertainty, and as a result input produced by these technologies is often converted to conventional events as quickly as possible, leading to a stunted interactive experience.  The authors present a framework for handling input with uncertainty in a systematic, extensible, and easy-to-manipulate fashion.  For example, a probabilistic finite state machine can be developed to handle the uncertainty of a touch that lands between two buttons.
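
The framework's internals aren't given in this summary, so here is a toy Python sketch of just the motivating example: split an ambiguous touch's probability between nearby buttons and only commit once one interpretation clearly dominates.  All numbers are invented.

def button_probabilities(touch_x, buttons):
    # Weight each button by inverse distance to the touch point, then normalize.
    weights = {name: 1.0 / (1.0 + abs(touch_x - x)) for name, x in buttons.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

probs = button_probabilities(touch_x=112, buttons={"Save": 100, "Delete": 130})
target, p = max(probs.items(), key=lambda kv: kv[1])
# Defer the decision instead of snapping to the nearest button:
print(target if p > 0.8 else "ambiguous - keep both interpretations alive")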
 
 
Discussion:
I think that this would be very useful.  When using touch-based input, I have found that the device often misreads where I pressed, which gets annoying very fast.  Something like this would help reduce the errors that different inputs are prone to.
 
 

Paper Reading # 13 - Gestalt: integrated support for implementation and analysis in machine learning

References:
Kayur Patel, Naomi Bancroft, Steven M. Drucker, James Fogarty, Andrew J. Ko, James A. Landay
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology



Summary:
This paper is about Gestalt, a new development environment for machine learning.  Where most programming environments focus on source code, Gestalt works with both source code and data.  It allows developers to create a classification pipeline, follow data through that pipeline, and transition easily between implementation and analysis.  An experiment conducted with the new environment showed a significant increase in bug detection and fixes.
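
Gestalt's actual interface can't be shown here; as a sketch of the idea it supports, here is a minimal Python classification pipeline where every example carries its intermediate results, so a misclassification can be traced back step by step (the task and features are mine):

def featurize(text):
    return {"words": len(text.split()), "has_excl": "!" in text}

def classify(features):
    return "spam" if features["has_excl"] and features["words"] < 5 else "ham"

examples = ["Buy now!", "Meeting moved to 3pm tomorrow, agenda attached."]
trace = []
for text in examples:
    feats = featurize(text)
    trace.append({"input": text, "features": feats, "label": classify(feats)})

for row in trace:  # inspect the data at every stage, Gestalt-style
    print(row)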

Discussion:
I think that this could be a very useful tool.  When developing code, one of the hardest things is finding bugs, and if Gestalt makes that easier, I would like to give it a try.  Also, when just looking at code it can be difficult to visualize what is going on, so it is important that the environment can transition easily between views.


Paper Reading # 12 - Pen + touch = new tools

References:
Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, and Bill Buxton
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology
 
Summary:
This paper is about a new interface that relies on a pen for input as well as touch.  The authors began by observing how people use a pen with a notebook, and went from there.  They decided to divide the labor: in general, the pen writes, touch manipulates, and a combination of the two produces other tools.  They used a Microsoft Surface and an LED pen for testing.  The prototype was developed mainly for note taking and scrapbooking.  The paper describes some of the tools this enables, like holding a picture and dragging with the pen to create a copy, or holding photos and tapping one with the pen to staple them together.
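
Here is a toy Python sketch of the division of labor the paper describes (pen writes, touch manipulates, pen plus touch makes new tools); the event model is my own simplification:

def handle_input(pen_down: bool, touch_down: bool) -> str:
    if pen_down and touch_down:
        return "tool"        # e.g. hold a photo with a finger, drag the pen to copy it
    if pen_down:
        return "ink"         # the pen writes
    if touch_down:
        return "manipulate"  # touch pans, zooms, and moves objects
    return "idle"

print(handle_input(pen_down=True, touch_down=False))  # -> ink
print(handle_input(pen_down=True, touch_down=True))   # -> tool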

Discussion:
I think that this is a design with a limited audience.  As there are two sides to the system, it will draw two separate crowds.  For people looking to take notes, this system will have to be well made to compete with systems that have been out for a while, like tablet PCs.  For scrapbooking, my guess is that most people who already scrapbook enjoy what they are doing without using a computer.  I think that more people who have not scrapbooked before will buy this product, because those who have will feel that it takes away from the personalization of what they do.  However, that is just my opinion; I could be completely wrong.  I do think that some of these techniques could be incorporated into existing systems, though.


Paper Reading #7 - The coffee lab: developing a public usability space

References:
Karam, M. (2010). The coffee lab: developing a public usability space. Proceedings of the ACM conference on human factors in computing systems (pp. 2671-2680). Atlanta: http://www.sigchi.org/chi2010/.


Summary:
This paper is about a new idea for conducting usability studies in a public space.  The lab is set up in a coffee shop in Toronto and consists of several interactive systems.  Conducting public usability tests outside of a lab is new; it allows the study to reach a better variety of participants, which in turn allows for a more accurate view of the public's reactions.

Discussion:
I think that this is a great concept.  Studies are always trying to capture the participants' natural reactions, but this cannot be done when they are outside of their natural environment.  This study allows the participants to act naturally while still testing the system, which allows for more genuine reactions.  This in turn gives Karam better information about the systems.


Paper Reading #6 - There's a monster in my kitchen: using aversive feedback to motivate behaviour change

References:
Kirman, B., et al. (2010). There's a monster in my kitchen: using aversive feedback to motivate behaviour change. Proceedings of the ACM conference on human factors in computing systems (pp. 2685-2694). Atlanta: http://www.sigchi.org/chi2010/.

Summary:
This paper is about a system for managing power usage.  The system's design is based on negative feedback, and the authors use a kitchen as an example.  When an appliance is used with poor power management, the user gets a verbal rebuke, or even a text message.  The system is also given enough control to reduce the power drain of certain appliances that have been misused in the past.
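
The paper's exact feedback rules aren't given here, so this is a hypothetical Python sketch of the trigger logic only, with invented baselines, thresholds, and messages:

BASELINE_WATTS = {"oven": 2400, "kettle": 2000}

def check_usage(appliance: str, watts: float, minutes_on: float) -> str:
    if watts > BASELINE_WATTS.get(appliance, 0) * 1.5:
        return f"Rebuke: the {appliance} is drawing far more power than normal!"
    if minutes_on > 30:
        return f"Rebuke: the {appliance} has been left on for {minutes_on:.0f} minutes."
    return "ok"

print(check_usage("kettle", watts=3200, minutes_on=2))  # wasteful draw -> rebuke
print(check_usage("oven", watts=2300, minutes_on=45))   # left on too long -> rebuke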

Discussion:
I think that this system could be useful.  Most people probably don't even realize when they are using appliances poorly.  This would give them a chance to fix that and save some money on electricity bills.  That being said, I am not sure that negative reinforcement alone is the best option; there are people who don't react well to it.  I would think that a mix of positive and negative reinforcement would be better.

Paper Reading #5 - A multi-touch enabled steering wheel: exploring the design space



References:
Pfeiffer, M., et al. (2010). A multi-touch enabled steering wheel - exploring the design space. Proceedings of the ACM conference on human factors in computing systems (pp. 3355-3360). Atlanta: http://www.sigchi.org/chi2010/.

Summary:
This paper is about a new way to control functions in a car.  The purpose is to keep the driver's hands on the steering wheel while still allowing them to control things like the radio and GPS.  In addition, drivers can create different gestures for each function, so each person can customize the controls to what feels natural.  The authors tested this with a driver in a simulation using a prototype of the steering wheel, and found that many gestures were already intuitive, like pinching in to zoom out on a map.
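
Here is a minimal Python sketch of the customization idea, letting a driver bind their own gestures to car functions; the gesture names and actions are invented:

bindings = {}

def bind(gesture: str, action) -> None:
    bindings[gesture] = action

def on_gesture(gesture: str) -> None:
    action = bindings.get(gesture)
    if action:
        action()  # ignore unbound gestures rather than guessing

bind("swipe_up", lambda: print("radio volume up"))
bind("pinch_in", lambda: print("zoom out on map"))  # a gesture drivers chose naturally

on_gesture("pinch_in")  # -> zoom out on map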

Discussion:
This could be a good idea, but it could also be a bad one.  It is cool that you can create personalized gestures for different functions, and it allows the driver to keep their hands on the wheel.  However, there is a risk of the driver accidentally making a gesture and triggering something they weren't expecting.  The system may also demand too much of the driver's attention, which could lead to accidents.  All in all, I think that this has promise, but I would definitely want safety tests first.

Paper Reading #2 - Communicating software agreement content using narrative pictograms

References:
Matthew Kay and Michael Terry
CHI EA '10 Proceedings of the 28th international conference extended abstracts on Human factors in computing systems

Summary:
This paper is about using pictures to convey the terms of a licensing agreement.  Software agreements are currently text only; they are usually many pages long and use language that is difficult for average users to understand.  This paper discusses rules for incorporating images into the agreements to help users understand what they are agreeing to, or potentially for producing an agreement that is based only on pictures.  The authors believe that developing this method will help users understand the terms of licensing agreements without forcing them to learn technical jargon.

Discussion:
I think that this is a great idea.  Whenever I download new software, whether it is the new version of iTunes or a web browser to replace IE, the licensing agreements are annoying.  I doubt that I'm the only one who has clicked "I Agree" without reading the terms completely, or even at all.  The average user doesn't want to spend half an hour trying to read the agreement and looking up something in every other sentence.  I think that incorporating images would keep users informed without wasting their time.