Ninth Asian Computing Science Conference, Dec. 8-10, 2004, Chiang Mai

In many areas of Computer Science, the focus of efforts is moving away from
concerns of data to address the information that this data represents. This
movement is reflected also in patterns of employment and in the development
of branches of other sciences that employ Computer Science techniques as tools
in their study. A challenge for Computer Science is the development of concepts
and tools that support more directly the information-oriented nature of many applications.

The theme of this conference is HIGHER-LEVEL DECISION-MAKING. "Higher-level"
refers to a decision-making process that may involve the gathering of information,
analysis, reflection, negotiation, and the synthesis of a decision and its
justification. Whether the final decision is made by a software or human agent,
large parts - and perhaps all - of such a process can be automated. In pursuit
of this goal, new concepts and techniques in the design and development of
such processes and their elements are needed.

Using knowledge to guide search: an interactive prototype,
by: Dr. Paul Janecek,
Swiss Federal Institute of Technology, Switzerland.
Tuesday, Sep. 28, 2004, 10:00, CSIM #209.
In this presentation I give a brief overview of the role of domain knowledge
in search and then introduce semantic fisheye views (SFEVs), an interactive
"focus + context" visualization technique. SFEVs interactively manage
the complexity of visual representations by emphasizing and increasing the
detail of information related to the user's current focus and de-emphasizing
or filtering less important information. I will demonstrate a prototype that
uses this technique to interactively explore a large annotated image collection
using the semantic relationships modeled in WordNet. In conclusion, I will
discuss future directions of research for using ontologies and metadata to guide search.
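
The emphasize/de-emphasize behavior described above can be sketched as a degree-of-interest (DOI) computation in the spirit of fisheye views: each item's visual prominence combines its a priori importance with its semantic distance from the current focus. The function names, weights, and toy distance table below are illustrative assumptions, not the prototype's actual API or WordNet model.

```python
# Minimal degree-of-interest sketch for a semantic fisheye view.
# DOI(item) = API(item) - w * dist(focus, item), where API is the
# a priori importance and dist is a semantic distance.

def degree_of_interest(item, focus, a_priori, distance, w=1.0):
    return a_priori[item] - w * distance(focus, item)

# Toy stand-in for semantic distance (e.g., hops in a WordNet-like
# hierarchy); a real system would query the ontology instead.
hops = {("dog", "dog"): 0, ("dog", "cat"): 2, ("dog", "car"): 5}

def dist(a, b):
    return hops.get((a, b), hops.get((b, a), 10))

# A priori importance of each annotated item (assumed values).
api = {"dog": 1.0, "cat": 0.8, "car": 0.9}

focus = "dog"
doi = {x: degree_of_interest(x, focus, api, dist) for x in api}

# Items close to the focus get more detail; items whose DOI falls
# below a threshold are de-emphasized or filtered out entirely.
visible = [x for x in sorted(doi, key=doi.get, reverse=True) if doi[x] > -2]
```

Changing the focus item recomputes every DOI, which is what makes the view interactive: detail follows the user's attention rather than a fixed layout.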

Incentive-Compatible Social Choice,
by: Prof. Boi Faltings,
Swiss Federal Institute of Technology, Switzerland.
Friday, Sep. 17, 2004, 15:30, CSIM #209.
Many situations present a social choice problem where different self-interested
agents have to agree on joint, coordinated decisions. For example, power companies
have to agree on how to use the power grid, and airlines have to agree on
how to schedule takeoffs and landings.
Mechanisms for social choice are called incentive-compatible when cooperative
behavior is optimal for all parties. The most well-known examples of incentive-compatible
mechanisms are auctions. However, the party that receives the auction revenue
has an incentive to manipulate the outcome to increase the revenue. For example,
a power grid operator has an incentive to reduce capacity and drive up prices.
I present a novel mechanism for social choice that is incentive-compatible
without generating a payment surplus. I give several examples of applications
where it solves the social choice problem without unwanted incentives, and
provides significantly better overall utility than any other known mechanism.
I also discuss how such mechanisms could be used to solve social choice
problems in multi-agent systems.

EMPATH: A Computational Model of Human Facial Expression Recognition,
by: Dr. Matthew Dailey, University of California, San Diego.
Wednesday, May 12, 2004, 10:00, CSIM #209.
Understanding how humans (consciously or unconsciously) communicate emotion
through facial expressions is important for both the design of advanced human-machine
interfaces and the field of psychology. Some machines, such as care-giving
robots, would be more effective if they could modify their behavior based
on their users' emotions as inferred by a computational model. For psychology,
computational models can potentially clarify how different theories about
facial expression recognition might be implemented in the brain, thus sharpening
the long, intense debate in the community.
This talk will describe EMPATH, a system that makes progress on both fronts.
EMPATH is a statistical model that learns, through training, to classify static
facial expressions as either happy, sad, afraid, angry, surprised, or disgusted.
We find, somewhat surprisingly, that relatively simple yet biologically plausible
image processing and machine learning techniques yield a system that performs
as well as naive humans in psychological experiments. But even more surprisingly,
the model also matches a large variety of psychological data on categorization,
stimulus similarity, reaction times, stimulus discrimination, and recognition
difficulty, both qualitatively and quantitatively. These results suggest that
many of the seemingly complex psychological phenomena related to facial expression
perception need not be seen as so complex after all.
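
The "simple yet biologically plausible image processing and machine learning" pipeline the abstract alludes to can be sketched as three stages: fixed nonlinear filtering, linear dimensionality reduction, and a trained classifier over the six expression categories. Everything below is an illustrative stand-in (random projections instead of Gabor filters and PCA, toy Gaussian clusters instead of face images), not EMPATH's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)
LABELS = ["happy", "sad", "afraid", "angry", "surprised", "disgusted"]

def filter_bank(images, W):
    # Stage 1: fixed nonlinear filtering (Gabor-like in a biologically
    # plausible model; a random projection with tanh here).
    return np.tanh(images @ W)

def reduce_dim(feats, P):
    # Stage 2: linear dimensionality reduction (PCA stand-in).
    return feats @ P

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(F, y, n_classes, steps=500, lr=0.1):
    # Stage 3: one linear layer trained by gradient descent on
    # cross-entropy, i.e. a statistical model that learns to classify.
    W = np.zeros((F.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        W -= lr * F.T @ (softmax(F @ W) - Y) / len(F)
    return W

# Toy data: six Gaussian clusters standing in for labeled face images.
centers = 3.0 * rng.normal(size=(6, 64))
X = np.repeat(centers, 20, axis=0) + rng.normal(size=(120, 64))
y = np.repeat(np.arange(6), 20)

F = reduce_dim(filter_bank(X, rng.normal(size=(64, 32))), rng.normal(size=(32, 16)))
F = (F - F.mean(axis=0)) / F.std(axis=0)  # standardize features

W = train_softmax(F, y, len(LABELS))
probs = softmax(F @ W)               # per-image category probabilities
accuracy = (probs.argmax(axis=1) == y).mean()
```

The graded probabilities in `probs`, rather than hard labels alone, are the kind of model output that can be compared against human similarity, discrimination, and reaction-time data.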

Social Agents in Digital Cities,
by: Prof. Toru Ishida, Department of Social Informatics, Kyoto University, Japan.
Thursday, Jan. 29, 2004, 14:00, CSIM #209.
The research community tackling agents and multiagent systems has studied
and developed various agents. Typically, personal agents belong to humans
and help their users to operate complex computer/communication systems. In
the area of multiagent systems, though computational agents are designed to
interact (both collaborate and compete) with each other, interaction between
humans and computational agents has not been studied intensively. In this
talk, however, I focus on social agents that can be members of a human community:
social agents support human-human interactions, while personal agents support
human-computer interactions. To understand the nature of social agents, we
need a research platform to play with them. This talk proposes a scenario
description language called Q for designing interactions among a large number
of agents, and its application to digital cities and crisis management simulations.
We have started a five-year project to develop social agents for supporting
social interactions in digital cities.