TimeMachineBoard: A Casual Meeting System Capable of Reusing Previous Discussions

Kentaro ISHITOYA
Shigeki OHIRA
Center for Information Media Studies, Nagoya University
Katashi NAGAO
Dept. of Information Engineering, School of Engineering, Nagoya University

1 Introduction

Organizations have become knowledge-oriented, with high value placed on knowledge and its transmission. Methods for extracting knowledge from knowledge workers' various organizational activities are being studied. Current knowledge management systems can only handle computerized digital information. However, knowledge workers also participate in diverse offline activities when they meet other people and share time and space with them in the real world.

We have been researching knowledge discovery from meetings, which are ordinary knowledge-intensive activities in the real world. One type of meeting we focus on is the face-to-face (not online) meeting, which can easily be held anywhere and anytime. We call this type of meeting ``a casual meeting'' and are developing technologies to support it.

Ideas and comments from participants during casual meetings are a mixture of good and bad and are not always helpful. However, all parts of a discussion become background information for future knowledge activities. There are many benefits to remembering past knowledge activities. For example, past discussions can be recalled and compiled to create a new idea, or composite ideas from past discussions can enhance the participants' depth of knowledge.

In a casual meeting, recording what was discussed can be difficult, and this causes problems. For instance, it is easy to forget, and therefore lose, some of the conceptual relationships formed during previous discussions, and the discussions have no explicit relevance to the conclusions.

There have been several studies on recording whiteboard contents from casual meetings, and the focal point of this work is user interfaces. In this paper, we regard the casual meeting as a collaborative activity in which participants create reusable content that represents the discussion, and we developed a system to assist this activity.

With the advance of technology, projectors and large displays have become inexpensive and easy to obtain, so we can now assume that there will be two or more displays in a meeting environment. Such multiple displays can be used cooperatively for presentations. For instance, during a meeting, we can draw a figure on a large display that is appropriate for freehand drawing, and then copy and rearrange the drawings on a larger screen that is appropriate for looking over information.

We developed a casual meeting system for a multi-user, multi-display environment called the TimeMachineBoard. Using the system, we can precisely record the content of discussions at casual meetings and easily and accurately refer back to previous discussions during or after a meeting.

We describe the TimeMachineBoard, then we discuss our methods for recording, retrieving, and quoting the content of casual meetings.

2 TimeMachineBoard: A Novel System for Casual Meetings

In this study, we focus on casual meetings that have the following characteristics:

  • small group of two to five people

  • uses tools such as a whiteboard, blackboard, or flip chart if needed

  • does not require preparation

  • does not require a chairperson, secretary, or facilitator

2.1 Concept of the System

The system must not interrupt the discussion. Therefore, we designed the system to collect implicit information as automatically as possible and to assist with things that only humans can do as efficiently as possible.

Research focused on real-world interactions in meetings faces the challenge of recording the participants' words, expressions, and gestures as multimedia data, including audio and video, so many types of equipment are often required to record casual meetings in detail. Our system decreases the amount of equipment needed so that it can be set up in various places; it is designed to be lightweight.

We constructed an infrared (IR) based setup to keep track of who is accessing which part of each display while participants collaboratively create discussion content in a multi-display environment. Moreover, according to Ju's research, participants conduct discussions through implicit activities such as moving close to the display to access the board directly and stepping away from the display to look over the content of the discussion. We therefore provide two devices to support these activities: pen devices that enable direct manipulation at the board and pointer devices that enable access to the board from a distance.

With this concept, our system can transfer text and images, including drawings, and display them on multiple displays. Participants can create reusable discussion content and look over the discussion at the same time while collaboratively swapping roles.

2.2 System Overview

Figure 1: Appearance of the system, with large display

Figure 2: Sticky image transfer window

Figure 3: Pointer (left) and pen (right)

Our system is shown in Figure 1. Participants are able to choose one or more displays for their purpose. The large display is appropriate for handwriting, and the projector screen is suitable for classifying and arranging information to look it over. The functionalities of our system are described below.

【Information transfer and display function】We provide client software, called Sticky, to transfer images and text to the selected display. We call these transferred objects ``display objects.''

The Sticky display object transfer window is shown in Figure 2. The TimeMachineBoards active in the current meeting environment are listed in a drop-down menu at the top of the window. Below the list are ``Text'' and ``Image'' tabs for choosing the type of object to transfer.

Integrating display objects into a talk facilitates understanding of the discussion. Display objects' information can be a search index of the discussion. Moreover, relationships between discussions can be extracted by transferring display objects between multiple displays.
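
To make this concrete, the following is a minimal Python sketch of how a Sticky client might package and send a display object to a selected TimeMachineBoard. The field names, the JSON-over-TCP encoding, and the port are our own illustrative assumptions, not the system's actual protocol.

    import json
    import socket
    import time

    # Hypothetical display-object transfer from Sticky to a TimeMachineBoard.
    # Field names and the JSON-over-TCP encoding are assumptions for illustration.
    def send_display_object(board_host, board_port, sender_id, object_type, payload):
        message = {
            "sender": sender_id,       # participant who transferred the object
            "type": object_type,       # "text" or "image"
            "payload": payload,        # text string, or image data encoded as base64 text
            "timestamp": time.time(),  # later usable as a search index into the discussion
        }
        with socket.create_connection((board_host, board_port)) as sock:
            sock.sendall(json.dumps(message).encode("utf-8"))

    # Example: transfer a text display object to the board chosen in the drop-down menu.
    # send_display_object("board-lab-east.local", 9100, "participant-3", "text", "agenda: budget")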

【Pen function】The pen device shown in Figure 3 enables participants to draw figures, directly move or scale display objects on the large display, or underline text.

【Pointer function】The pointer device shown in Figure 3 enables participants to point to information on the large display or projector screen. Like the pen device, it can be used to underline text and to move and scale display objects.

【Discussion segmentation function】This function enables participants to use the pointer to define start and end points to segment the current discussion.

【Search and quotation function】This function enables participants to search past discussions and display the results on the large display or projector screen and quote part of a past discussion in a current discussion using the pen or pointer.

Our system is designed to support casual meetings that have the characteristics listed at the beginning of this section.

We have now given the concept and an overview of the TimeMachineBoard; in the next subsection, we explain how it records the information discussed at meetings.

2.3 Recording Casual Meetings

As described above, there are many benefits to remembering other participants' ideas and comments from past knowledge activities.

We designed a structuring method for recording casual meetings that focuses on their constituent elements and on extracting index information from participants' natural activities. Our system records the following elements and structures them:

  • sounds of the discussion

  • background images of the board

  • pen strokes (including handwritten figures, arrows, rectangles, and others)

  • pointing information

  • pointer strokes

  • display objects transferred via Sticky

  • arrangement, scaling, deletion, and editing of display objects

  • information underlined in display objects

  • segmentation of the discussion

  • search histories

  • reuse histories

  • meeting information (start/end time, place, participants)

In this paper, we call the recorded content of a meeting ``discussion content''. Discussion content created by participants' collaborative activity is temporal content including spatial information of display objects and handwritten strokes, timeline information of sounds, and operation history information. Data from casual meetings is accumulated in the Web database as discussion content that can be accessed through a network.
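
To make the structure of discussion content concrete, the following is a minimal sketch of how one record might be organized before it is stored in the Web database. The field names and nesting are our own illustration, not the system's actual schema.

    # Hypothetical structure of one discussion-content record. Field names and
    # nesting are illustrative assumptions, not the actual schema.
    discussion_content = {
        "meeting": {                                  # meeting information
            "start": "2009-07-01T13:00",
            "end": "2009-07-01T13:40",
            "place": "meeting space",
            "participants": ["A", "B", "C"],
        },
        "audio": "audio/20090701_1300.wav",           # sounds of the discussion
        "background_images": ["bg/board_001.png"],
        "segments": [{"start": 0.0, "end": 480.0}],   # discussion segmentation
        "events": [                                   # time-stamped operation history
            {"t": 12.3, "kind": "pen_stroke", "who": "A", "points": [[10, 20], [15, 24]]},
            {"t": 40.8, "kind": "display_object", "who": "B",
             "object": {"type": "text", "payload": "budget plan"}},
            {"t": 55.1, "kind": "pointing", "who": "C", "display": "projector", "xy": [0.4, 0.7]},
            {"t": 90.0, "kind": "underline", "who": "A", "object_index": 1},
            {"t": 130.2, "kind": "move", "who": "B", "object_index": 1, "to": [0.6, 0.2]},
        ],
        "search_history": [],                         # queries issued during the meeting
        "quotation_history": [],                      # links to quoted past discussions
    }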

As described above, our system provides whiteboard functionality electronically with a pen. Moreover, it provides pointing, discussion segmentation, and discussion search and reuse functions that are impossible with a conventional whiteboard. These functions address the problems of casual meetings described in the introduction.

We describe the system architecture in the next subsection.

2.4 System Architecture

Our system's architecture is shown in Figure 4. The system has two components: a meeting environment component that directly assists participants and a backend component that participants are not conscious of.

Figure 4: System architecture

The meeting environment component consists of a large display and a projector screen as displays, and a pen, a pointer, and Sticky as input devices. The multiple displays in the environment can be used cooperatively. Each display is connected to its own computer running the TimeMachineBoard's basic program.

Meeting Environment Components

First, we describe the meeting environment components. For the meeting environment, participants can choose between two kinds of tools: a large display for drawing figures and a projector screen for classifying and arranging information. The only difference between the two displays is in pen usability. Generally, a projector screen is too large to capture with an IR camera. In addition, drawing figures on it is difficult because participants cast a shadow when they are near the screen.

Figure 5: IR bar for the large display

Figure 6: IR camera for pen recognition

We use a Wiimote as the system's remote controller. It has 12 buttons, including power, A, B, the directional buttons, minus, home, plus, and two numbered buttons. It also has an IR camera that can recognize up to four IR LED points. Moreover, the Wiimote communicates over Bluetooth, so we can use it in various ways in our system, e.g., as a remote controller, a pointing device, or a pen.

Now we describe the principle of the pointer operation. As mentioned above, we use the Wiimote as a pointing device and allocate one Wiimote with a unique ID to each participant. We set an IR bar, shown in Figure 5, above the displays. This IR bar is the base point for the coordinates captured by an IR camera when processing information from a pointing device or pen. The IR bar can output a unique ID, called an IRID, by blinking its LEDs. The IRID is decoded by an IRID decoder connected to the Wiimote's extension port, shown at the left in Figure 3. The pointer server can then capture the Wiimote's unique ID, the IRID recognized by the decoder, and the IR bar's point coordinates from the Wiimote. The pointer server sends a UDP packet to all connected clients, and each client checks whether the received packet is addressed to itself and processes it accordingly. With this system, we can keep track of who is pointing to which part of which display.
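
As a minimal sketch of this step, the following Python fragment shows how the pointer server might combine the Wiimote's unique ID, the decoded IRID, and the captured IR bar coordinates into a UDP packet broadcast to all clients. The packet fields and JSON encoding are assumptions for illustration, not the actual wire format.

    import json
    import socket

    # Sketch of the pointer server's broadcast step (assumed packet format).
    def broadcast_pointer_event(clients, wiimote_id, irid, bar_points):
        packet = json.dumps({
            "wiimote_id": wiimote_id,   # identifies the participant's pointer or pen camera
            "irid": irid,               # identifies which display's IR bar was seen
            "bar_points": bar_points,   # IR bar coordinates in the Wiimote camera frame
        }).encode("utf-8")
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for host, port in clients:      # one UDP packet to every connected client
            sock.sendto(packet, (host, port))
        sock.close()

Each client then compares the IRID in the packet with the IRID of its own IR bar and ignores packets intended for other displays.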

We next describe the pen architecture. As described above, there is an IR LED on the back of the pen, and when the pen is pointed at the large display, the IR LED turns on. To draw figures on the large display using the pen, the system must recognize the coordinates of the pen. In our system, we use an extension of Lee's method to recognize the pen's coordinates. First, an IR camera is placed in midair in front of the large display for pen recognition, as shown in Figure 6. The IR camera always recognizes the IR bar's coordinates as a base point. When the pen is pointed towards the display, the IR camera recognizes two points. The base point is fixed at the top of the camera coordinates, so we can determine that the pen point is the bottom one. Then, each participant calibrates his or her pen coordinates from camera coordinates to display coordinates before receiving packets from the pointer server. Through this process, figures can be drawn on the large display as the pen coordinates are recognized in real time. Moreover, the pen's IR LED can output a unique ID by blinking. We assign the same unique ID to each participant's pen and pointer, so we can tell who is drawing a figure.
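
The coordinate mapping in this step can be sketched as follows, assuming (as in Lee's method) that the participant touches four known display corners with the pen during calibration and that a projective transform is then fitted; the function names are our own.

    import numpy as np

    def fit_homography(camera_pts, display_pts):
        """Fit a 3x3 homography mapping four camera points to four display points."""
        A, b = [], []
        for (x, y), (u, v) in zip(camera_pts, display_pts):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
        h = np.linalg.solve(np.array(A, float), np.array(b, float))
        return np.append(h, 1.0).reshape(3, 3)

    def camera_to_display(H, point):
        """Map one pen point from camera coordinates to display coordinates."""
        u, v, w = H @ np.array([point[0], point[1], 1.0])
        return (u / w, v / w)

    def split_points(two_points):
        """The IR bar's base point stays at the top of the camera image, so the
        lower of the two detected points (larger y) is taken to be the pen tip."""
        p1, p2 = two_points
        return (p1, p2) if p1[1] < p2[1] else (p2, p1)   # returns (base, pen)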


Backend Component

The backend component of the TimeMachineBoard is structured with a Web database server that records and retrieves all discussion content, an index server that coordinates the programs in the environment, such as the TimeMachineBoards and Stickies, and a pointer server that manages the IR devices: the pointers, pens, and IR cameras.

Figure 7: System of the pointer server

【Web database server】The Web database server is a system for recording information captured from each meeting environment. Our system supports meetings with multiple displays. There may be demand to retrieve, play, and reuse past discussion content from any TimeMachineBoard, so we must not store discussion content on individual TimeMachineBoards. Instead, we must store discussion content in a location that is accessible from any TimeMachineBoard. Moreover, we must consider the accessibility of the discussion content from other systems. Therefore, we designed the database as a Web database that has an API for searching and inserting information and that can be accessed from anywhere on a network via HTTP.
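
The paragraph above implies a small HTTP interface for inserting and searching discussion content. The following sketch shows how a TimeMachineBoard might call such an API using Python's requests library; the base URL, endpoint paths, and parameter names are hypothetical and only illustrate the idea of a network-accessible Web database.

    import requests

    BASE = "http://db.example.org/api"   # hypothetical base URL of the Web database

    def store_discussion(content: dict) -> str:
        """Insert one discussion-content record and return its identifier."""
        resp = requests.post(f"{BASE}/discussions", json=content)
        resp.raise_for_status()
        return resp.json()["id"]

    def search_discussions(query: str) -> list:
        """Search past discussion content by a query string."""
        resp = requests.get(f"{BASE}/discussions/search", params={"q": query})
        resp.raise_for_status()
        return resp.json()["results"]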

【Index server】The index server supports cooperation among the software applications in a meeting environment. There are at least two types of software in a meeting environment: TimeMachineBoards for displaying information and Sticky for transferring text or images to a TimeMachineBoard. When a participant selects display objects on the TimeMachineBoard, the selected information is transferred to Sticky, where the participant can then delete or edit the selected display object. To achieve this software cooperation, we established a peer-to-peer network with the index server and treated each TimeMachineBoard and Sticky as a peer of the network.
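
As a rough illustration of the bookkeeping such an index server could perform, the following sketch keeps a registry of peers and lets a Sticky look up the active boards for its drop-down menu. The class and method names are our own and not the system's actual interface.

    # Minimal sketch of peer bookkeeping on the index server (assumed interface).
    class IndexServer:
        def __init__(self):
            self.peers = {}                      # name -> (kind, host, port)

        def register(self, name, kind, host, port):
            """A TimeMachineBoard or Sticky announces itself when it starts."""
            self.peers[name] = (kind, host, port)

        def boards(self):
            """Sticky calls this to fill its drop-down menu of active boards."""
            return [(n, h, p) for n, (k, h, p) in self.peers.items() if k == "board"]

    # index = IndexServer()
    # index.register("board-lab-east", "board", "192.168.0.10", 9100)
    # index.register("sticky-ishitoya", "sticky", "192.168.0.21", 9200)
    # index.boards()   # -> [("board-lab-east", "192.168.0.10", 9100)]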

【Pointer server】The pointer server manages the Wiimote devices in the environment and processes the coordinate information captured by each Wiimote's IR camera. The pointer server's system structure is shown in Figure 7. In a meeting environment, there are multiple IR cameras, participants' pointer Wiimotes, and Wiimotes for recognizing pen input. Communication between the pointer server and a Wiimote in the environment is established over Bluetooth. The pointer server runs software called the Wiimote Detector to detect Wiimotes in the environment. When a Wiimote that has been registered with the Wiimote Detector powers on, the Wiimote Detector automatically finds it and connects it to the server.

As described above, the pointer server captures information from the connected Wiimotes. The captured information includes the LED bar's coordinates, the IRID from the IR decoder, and the Wiimote's unique ID. The pointer server sends UDP packets to all connected clients, and each TimeMachineBoard checks whether a packet is destined for itself and, if so, processes it. In addition, the TimeMachineBoard checks whether the packet from the Wiimote is for pointer or for pen recognition by checking their unique IRIDs. With this chain of processes, the Wiimote can be used as a pointing device that interacts with the multi-display environment.
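
On the receiving side, the dispatch described above might look like the following sketch, which reuses the assumed packet fields from the broadcast sketch shown earlier; the constants and handler names are hypothetical, and the pen/pointer distinction is made on a set of device IDs as a simplification of the ID check described above.

    import json
    import socket

    MY_IRID = 3                   # IRID of the IR bar attached to this board (assumed)
    PEN_WIIMOTE_IDS = {"cam-1"}   # Wiimotes acting as pen-recognition cameras (assumed)

    def dispatch_loop(port=9300):
        """Receive pointer-server packets and route them to pen or pointer handling."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        while True:
            data, _ = sock.recvfrom(4096)
            packet = json.loads(data.decode("utf-8"))
            if packet["irid"] != MY_IRID:
                continue                          # packet is destined for another display
            if packet["wiimote_id"] in PEN_WIIMOTE_IDS:
                handle_pen(packet)                # draw strokes on the large display
            else:
                handle_pointer(packet)            # cursor, underline, move/scale objects

    def handle_pen(packet): ...
    def handle_pointer(packet): ...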

3 Quotation of Discussion Content


Retrieving and quoting previous discussion content during a casual meeting would facilitate more efficient meetings.

Casual meetings are held regardless of purpose, time, place, or subject and may be held spontaneously, so it is not always possible to prepare printed handouts. In this environment, a meeting system must be able to share the context of the discussion among all meeting participants. This would enable participants to retrieve and review past discussion content on demand in a casual meeting.

A query is needed to search past discussion content, but thinking about and typing a query string would interrupt the discussion. Therefore, we used the query-free search method developed by Henzinger et al. We provide a query-free search system that generates a query string from the currently available display objects, the participants' information, and the text underlined in the display objects. Search results are then shown on the large display or projector screen as the finished state of the discussion and, if needed, can be played back. These processes enable participants to share the background information they have in mind with each other and make meetings more effective.
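
The following is a minimal sketch of such query generation, assuming a simple keyword-weighting scheme in which underlined text and participant names count more heavily than ordinary display-object text; the weights and tokenizer are our own assumptions.

    import re
    from collections import Counter

    def build_query(display_texts, underlined_texts, participants, max_terms=8):
        """Build a query string from the current meeting context (query-free search)."""
        weights = Counter()
        for text in display_texts:                 # text of display objects on the boards
            weights.update(re.findall(r"\w+", text.lower()))
        for text in underlined_texts:              # underlining signals importance
            for term in re.findall(r"\w+", text.lower()):
                weights[term] += 3
        for name in participants:                  # who is present also narrows the results
            weights[name.lower()] += 2
        return " ".join(term for term, _ in weights.most_common(max_terms))

    # build_query(["budget plan for the demo"], ["demo"], ["Ishitoya"])
    # would rank "demo" and "ishitoya" ahead of the remaining terms.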

Moreover, not only retrieving but also reusing past discussions in the present discussion makes meetings more efficient. Our system therefore provides functionality to quote display objects (Figure 8). When a search result is displayed, participants can select drawings and display objects and quote them in the current discussion if they wish.

Quoting previous discussion content means selecting and copying text or image display objects that are a part of the previous discussion content for use in the current discussion.

The operation flow for quoting discussion content is as follows:

  1. start a new discussion

  2. search past discussions

  3. review past discussions

  4. select and copy image/text display objects

  5. return to the present discussion

  6. paste the display objects into the display for the present discussion

Figure 8: Concept of quotation
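
As a sketch of what the quotation step might do internally, the following fragment copies a selected display object from a past discussion-content record into the current one and records a link back to its origin in the quotation history. The field names follow the discussion-content sketch in Section 2.3 and are assumptions.

    import copy
    import time

    def quote_display_object(current_content, past_content, past_event_index):
        """Copy one display object from past discussion content into the current one."""
        source_event = past_content["events"][past_event_index]
        quoted = copy.deepcopy(source_event)
        quoted["t"] = time.time()                  # the object appears "now" in the current discussion
        quoted["quoted_from"] = {                  # provenance link kept for later review
            "meeting_start": past_content["meeting"]["start"],
            "event_index": past_event_index,
        }
        current_content["events"].append(quoted)
        current_content["quotation_history"].append(quoted["quoted_from"])
        return quoted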

One of the advantages of reusing past discussion content is the reduction in the time and effort needed to transfer the same images and text to the display. In this way, we can improve the coherence of the meeting. Moreover, the accumulated search and quotation histories linked with discussion content show the relationships between the current and past discussions. Reviewing knowledge activities through our system will make participants' activities more efficient. In addition, reviewing not entire past discussions but only their most relevant parts, and quoting and arranging them in a meeting, enables past discussions to be considered efficiently.

The relationship information among discussions indicates important background information for future knowledge activities that is difficult to extract using existing knowledge management systems. This relationship information can be used to assist knowledge workers' activities, for example, by visualizing the relationships as a graph so that knowledge workers can look over their daily knowledge activities. Reviewing previous activities and their background information would assist their ongoing activities. However, a detailed description of this topic is outside the scope of this paper.

4 Conclusion and Future Work

We have proposed a system to support information recording at casual meetings. We have also developed the system to support participants' collaborative creation of discussion content with multimodal data, such as handwritten figures and strokes and transferred display objects. The ability to search and quote previous discussion content will contribute to more efficient casual meetings.

Our future work includes development of a more efficient and flexible quotation system, evaluation of long-term operation, and deployment of the system for enhancing the performance of participants' knowledge activities.