A Community Support System Featuring Online Video Contents

Daisuke YAMAMOTO
Dept. of Media Science, Graduate School of Information Science, Nagoya University
Shigeki OHIRA
Ecotopia Science Institute, Nagoya University
Katashi NAGAO
Center for Information Media Studies, Nagoya University

1 Introduction

In recent years, the spread of broadband networks has made it possible to handle video content on the Internet flexibly. Although we can now watch video content online, the interactive nature of the Internet has not been fully exploited. In our earlier study, we proposed a system that allows users to easily annotate video content with a web browser, and we developed several applications based on these annotations, such as a web-based video retrieval system. In this paper, we propose a mechanism that activates video content and its surrounding community by applying the mechanisms of Weblogs to video content.

2 Weblogize Online Video Contents

A Weblog is a mechanism by which users efficiently syndicate their articles, called entries, on the Internet. Weblogs provide several mechanisms: users can attach comments to entries, link to other entries via trackbacks, and syndicate RSS (RDF Site Summary), a structured format that distributes update information for each entry. Using these mechanisms, the community centered on a Weblog entry can be activated as entries link to each other through trackbacks. In this research, we propose a mechanism that activates video content and its community by treating video content (or its scenes) as Weblog entries. We call this process ``Weblogize,'' and call the Weblogized video content a ``Videoblog.''
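As a minimal sketch of how a Videoblog scene could be syndicated like a Weblog entry, the following builds one RSS (RDF Site Summary) item for an updated scene. All URLs and element values are hypothetical examples, not part of the actual system.

```python
# Sketch: one RSS (RDF Site Summary) item announcing an update to a
# Videoblog scene. URLs and values are hypothetical.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RSS = "http://purl.org/rss/1.0/"

ET.register_namespace("rdf", RDF)
ET.register_namespace("", RSS)

# One <item> per updated entry; here the "entry" is a video scene.
item = ET.Element(f"{{{RSS}}}item",
                  {f"{{{RDF}}}about": "http://example.org/videoblog/42#t=120,180"})
ET.SubElement(item, f"{{{RSS}}}title").text = "Scene 120s-180s of content 42"
ET.SubElement(item, f"{{{RSS}}}link").text = "http://example.org/videoblog/42#t=120,180"
ET.SubElement(item, f"{{{RSS}}}description").text = "Annotated scene syndicated as a Weblog entry."

print(ET.tostring(item, encoding="unicode"))
```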

2.1 Web-based Video Annotation

The first mechanism needed for Weblogizing is one that lets users easily comment on video content. Here, we use iVAS (intelligent Video Annotation Server), developed in our earlier research.

Users can watch arbitrary video content accessible over the network and annotate it. We proposed three kinds of annotations in an electronic bulletin board style (Figure 1): a text annotation, with which users can comment on arbitrary scenes of the video; an impression annotation, with which users can associate the content with subjective impressions by clicking a mouse; and an evaluation annotation, with which users can rate each text annotation by clicking O/X buttons.
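The three annotation types can be summarized as simple records; the following sketch is only an illustration, and its field names and types are assumptions rather than the actual iVAS schema.

```python
# Sketch of the three iVAS annotation types as simple records.
# Field names and types are illustrative assumptions, not the iVAS schema.
from dataclasses import dataclass

@dataclass
class TextAnnotation:          # free-text comment on a scene
    content_id: str
    start: float               # scene start time (seconds)
    end: float                 # scene end time (seconds)
    user: str
    text: str

@dataclass
class ImpressionAnnotation:    # one-click subjective impression
    content_id: str
    time: float                # playback position when the button was clicked
    user: str
    impression: str            # e.g. "interesting", "funny"

@dataclass
class EvaluationAnnotation:    # O/X rating of an existing text annotation
    target_annotation_id: str
    user: str
    positive: bool             # True = O, False = X
```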

Using these mechanisms, users can communicate with each other about video content, and we have developed a video retrieval system based on these annotations.

Figure 1: An example of iVAS annotation

2.2 Trackback

As the second mechanism for Weblogizing, we introduce the trackback into iVAS. A trackback is a structure that automatically links a referring entry to the entry it refers to. Unlike mutual links between sites, trackbacks link entries at the level of their content. Furthermore, since such a link is created through human judgment, it is a stronger, content-aware link.

The structure of the Videoblog trackback is the same as that of a Weblog trackback, and links are formed by sending a trackback ping. The ping's destination URL specifies the ID of the target content and the start and end times of the scene.
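The following sketch sends such a ping. The destination URL format (content_id, start, end) is a hypothetical illustration; the POST parameters follow the standard TrackBack protocol.

```python
# Sketch: sending a trackback ping to a Videoblog scene.
# The ping URL format is hypothetical; the POST parameters are the standard
# TrackBack fields (title, excerpt, url, blog_name).
from urllib import request, parse

# Hypothetical ping URL: target content 42, scene from 120 s to 180 s.
ping_url = "http://example.org/ivas/trackback?content_id=42&start=120&end=180"

params = parse.urlencode({
    "title": "My entry about this scene",
    "excerpt": "A short summary of the referring Weblog entry.",
    "url": "http://example.org/myblog/2005/01/entry.html",   # referring entry
    "blog_name": "My Weblog",
}).encode("utf-8")

req = request.Request(ping_url, data=params,
                      headers={"Content-Type": "application/x-www-form-urlencoded"})
with request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))   # TrackBack replies with an XML result
```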

Therefore, the Videoblog system can connect to the existing Weblog trackback network without any modification to existing Weblogs.

3 Trackback-based Video Annotation

As mentioned above, we introduced the trackback mechanism into video content. The text of the entry at the trackback origin is related to the video scene it links to, so we can treat it as an annotation of that scene. If users have sufficient incentive to create trackback links, we can acquire video annotations at an extremely low cost. We propose the following video article interface as one means to that end.

3.1 Semi-Automatic Generation of Weblog Entry

A user watches video content with iVAS. Because annotations in this interface are made in real time while the user watches, we cannot necessarily acquire enough of them. However, when the content appeals to the user, the user is motivated to write an article about it. If the user wants to write such an article, we can expect them to make lightweight annotations, such as impression annotations, along the way.

The system then extracts candidate scenes for the video article from the annotations the user made in real time with iVAS. It recommends scenes the user is likely interested in and offers a template for writing a Weblog entry about the video content. Moreover, the system automatically associates the user's Weblog entry with those scenes by creating trackback links. The user writes the entry for these scenes based on the recommendation result.

The scene recommendation mechanism works roughly as follows. First, we introduce an interest level, which expresses how interested the user is in a scene. The interest level rises according to the annotations the user makes on the scene: specifically, it is calculated so that a scene where the user wrote a text annotation or made many impression annotations receives a higher value. The system then recommends the top n scenes by interest level. In addition, the system preferentially recommends scenes where the user's interest is higher than that of other users, in order to even out the number of annotations across scenes. A sketch of this calculation is shown below.
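The sketch below computes interest levels from the user's real-time annotations and returns the top-n scenes. The weights (TEXT_WEIGHT, IMPRESSION_WEIGHT, DIFF_WEIGHT) and the annotation-dict format are illustrative assumptions, not the values or data model of the actual system.

```python
# Sketch of the scene recommendation described above.
# Weights and the annotation format are illustrative assumptions.
TEXT_WEIGHT = 2.0        # a written comment signals strong interest
IMPRESSION_WEIGHT = 0.5  # each one-click impression adds a little
DIFF_WEIGHT = 1.0        # bonus when the user is more interested than others

def interest_level(annotations, scene_id, user):
    """Interest of `user` in scene `scene_id`; annotations are dicts with
    keys 'scene', 'user' and 'kind' ('text' or 'impression')."""
    texts = sum(1 for a in annotations
                if a["scene"] == scene_id and a["user"] == user and a["kind"] == "text")
    impressions = sum(1 for a in annotations
                      if a["scene"] == scene_id and a["user"] == user and a["kind"] == "impression")
    return TEXT_WEIGHT * texts + IMPRESSION_WEIGHT * impressions

def recommend_scenes(annotations, scene_ids, user, other_users, n=3):
    scored = []
    for scene_id in scene_ids:
        own = interest_level(annotations, scene_id, user)
        others = [interest_level(annotations, scene_id, u) for u in other_users]
        avg_others = sum(others) / len(others) if others else 0.0
        # Prefer scenes where this user's interest exceeds other users',
        # which helps even out the number of annotations per scene.
        score = own + DIFF_WEIGHT * max(0.0, own - avg_others)
        scored.append((score, scene_id))
    scored.sort(reverse=True)
    return [scene_id for _, scene_id in scored[:n]]
```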

3.2 Modification of Video Article

The user can correct the semi-automatically generated video article in the same way as writing an ordinary Weblog entry. The corrections are reflected in iVAS as text annotations and used to improve the accuracy of applications such as video retrieval. Two items can be corrected. The first is the range of the target scene of the annotation: the user can adjust the start and end cut points of the scene. The second is the text itself. Figure 2 shows an example of the editing screen. The system records the corrected result in the XML database as an annotation marked with an overwrite attribute. As a result, we can acquire highly reliable annotations corrected by human hands.
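The following sketch illustrates how such a correction could be stored as an overwriting annotation in an XML store. The element and attribute names here are hypothetical, not the actual iVAS schema.

```python
# Sketch: recording a user's correction as an overwriting annotation.
# Element and attribute names are hypothetical.
import xml.etree.ElementTree as ET

def record_correction(db_root, annotation_id, new_start, new_end, new_text, user):
    """Append a corrected version of an annotation, marked as an overwrite,
    so the original and the human-corrected version both remain available."""
    corrected = ET.SubElement(db_root, "annotation", {
        "overwrites": annotation_id,   # links back to the original annotation
        "source": "human-corrected",
        "user": user,
    })
    ET.SubElement(corrected, "start").text = str(new_start)
    ET.SubElement(corrected, "end").text = str(new_end)
    ET.SubElement(corrected, "text").text = new_text
    return corrected

# Usage: adjust the scene range and text of (hypothetical) annotation "a-0042".
root = ET.Element("annotations")
record_correction(root, "a-0042", 118.5, 182.0,
                  "Corrected description of the scene.", "user01")
print(ET.tostring(root, encoding="unicode"))
```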

Figure 2: Weblog editing page

4 Conclusion

This system links video content, Weblogs, and users. Its mechanisms have two advantages. First, by associating video content with Weblogs, the system promotes the distribution and activation of video content. Second, the acquired annotations can be exploited in our existing research, such as annotation-based video retrieval and annotation-based video summarization.