

 Competition

ANAC

Introduction

The Automated Negotiating Agent Competition (ANAC) is an international tournament that has been running since 2010 to bring together researchers from the negotiation community. ANAC provides a unique benchmark for evaluating practical negotiation strategies in multi-issue domains and has the following aims: to provide an incentive for the development of effective and efficient negotiation protocols and strategies for bidding, accepting, and opponent modeling in different negotiation scenarios; to collect and develop a benchmark of negotiation scenarios, protocols, and strategies; to develop a common set of tools and criteria for the evaluation and exploration of new protocols and strategies against benchmark scenarios, protocols, and strategies; and to set the research agenda for automated negotiation.

The previous competitions have spawned novel AI research on autonomous agent design, which is available to the wider research community.

This year, we introduce five different negotiation research challenges: Agent Negotiation and Elicitation (GeniusWeb framework), Human-Agent Negotiation (IAGO framework), Werewolf Game (AIWolf Framework), Supply Chain Management (NegMas framework), HUMAINE (HUman Multi-Agent Immersive NEgotiation).

We expect innovative and novel agent strategies to be developed, and the submitted ANAC 2020 agents will serve as a negotiating-agent repository for the negotiation community. Researchers can develop novel negotiating agents and evaluate them by comparing their performance with that of the ANAC 2020 agents.

Competition Schedule

To be announced.

Website URL

The competition webpage has been released at http://web.tuat.ac.jp/~katfuji/ANAC2020/

IQ test competition

Introduction

The automated IQ test competition is a test that covers three major categories of IQ test: verbal comprehension, diagram reasoning, and sequence reasoning. All questions are collected manually from genuine IQ tests designed for humans.

Participants are required to develop AI programs that solve these problems automatically, using the provided datasets.

Human players are also encouraged to participate in the test.

Competition Schedule

To be announced.

Website URL

The competition webpage has been released at https://www.iqtest.pub/

Angry Birds AI competition

Introduction

Now that computers have conquered Go, the next challenge for AI is the game of Angry Birds. Unlike Go and other supposedly difficult games, Angry Birds has an infinite number of possible actions to choose from, and the exact outcome of each action is unknown in advance. This makes the game very hard for computers to master. It adds to the difficulty that computers have to play live games the same way humans play them, without any additional information about the exact physics of the game. The world's best Angry Birds AI agents will meet during the IJCAI 2019 conference and compete for the title of AIBIRDS 2019 World Champion. IJCAI 2019 will also host the annual Angry Birds Man vs Machine Challenge, where we test whether AI can already beat the best human players. If you want to participate in this exciting competition, please go to https://aibirds.org/ for more details.

Competition Schedule

To be announced.

Website URL

The competition webpage has been released at https://aibirds.org/

Mahjong

Introduction

In this competition, your task is to develop an intelligent Mahjong agent that can compete with other agents as well as human players on the online AI platform Botzone. We adopt the Mahjong Competition Rules (MCR) in this challenge. We provide a sample program for Mahjong beginners, as well as a judge program, which can help you learn the rules of MCR and debug your AI. The final winner after two formal rounds will be the champion of the competition.

Competition Schedule

To be announced.

Website URL

The competition webpage has been released at https://botzone.org.cn/static/gamecontest2020a.html

Abstractive Short Text Summarization (AbSTS) Competition

Introduction

Abstractive text summarization (AbTS) is a very challenging task in natural language processing (NLP). Two bottlenecks greatly limit research on AbTS: 1) the lack of a large-scale corpus, and 2) the difficulty of objective evaluation. The latter is a general issue for most natural language generation (NLG) tasks. To abate the former issue, in 2015 we published a million-scale abstractive summarization corpus for short text, namely the LCSTS corpus. Up to now, hundreds of NLP teams all over the world have conducted their work with this corpus, which provides a very solid foundation for research on AbTS. With the development of language-generation techniques and the increasing need for abstractive summarization, AbTS is becoming a hot topic among NLP tasks. Nevertheless, compared with other NLP tasks such as reading comprehension, image caption generation, and machine translation, AbTS research still needs a great boost.

The Abstractive Short Text Summarization (AbSTS) Competition aims to provide an open platform for showcasing the most recent achievements in AbSTS techniques and to attract more NLP researchers to AbTS. Since writing a good summary or highlight sentence for a given text is also a challenging task for article writers, this competition may attract not only NLP researchers but also a very wide range of audiences outside the technical community.

The main task of the AbSTS Competition is to generate highlight sentence(s) for a given Chinese short text with a maximum length of 140 Chinese characters. The highlight sentence(s) must be generated rather than extracted from the original text and should cover the main information of the given text. Both quantitative and qualitative measures will be used at the final stage of the evaluation. The ROUGE measures will be used for the quantitative evaluation. For the qualitative evaluation, we will manually evaluate the top 5 or 10 systems according to their quantitative scores. The competition consists of two stages:

1) Model training stage: during this stage, participants will train and optimize their systems. To support participants in training their models, the Large-scale Chinese Short Text Summarization (LCSTS) dataset is provided. This corpus was automatically constructed from the Chinese microblogging website Sina Weibo to promote research on automatic text summarization. It consists of over 2 million real Chinese short texts with short summaries given by the writer of each text. There are 10,666 manually tagged short summaries with their corresponding short texts that will be provided as the development set. Another test set without summaries is provided to participants, who can submit their abstractive summaries for this test set each day. The quantitative evaluation will be conducted on this test set, and a leaderboard based on the quantitative evaluation results will be updated daily.

2) Final stage: during this stage, a different test set will be provided to participants, who will have one day to run and submit their final results for the new test set. The quantitative evaluation is first conducted for the submitted systems, and then the qualitative evaluation is run for the top-ranked systems. Final competition scores for the top-ranked systems are calculated by combining both quantitative and qualitative scores. For other systems, only quantitative scores are counted.
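For intuition, the ROUGE-based quantitative evaluation mentioned above can be sketched with a minimal ROUGE-1 F-score in plain Python. This is an illustrative simplification, not the competition's official scoring code; it assumes character-level unigrams, a common choice for Chinese text.

```python
from collections import Counter

def rouge_1_f(candidate, reference):
    """Compute a simple ROUGE-1 F1 score over unigrams.

    For Chinese text, individual characters are commonly used as
    the unigram tokens, so each character of the string is a token.
    """
    cand_counts = Counter(candidate)
    ref_counts = Counter(reference)
    # Counter '&' keeps each unigram at most as often as it appears in both.
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_1_f("abcd", "abcd"))  # identical strings -> 1.0
print(rouge_1_f("abcd", "efgh"))  # disjoint strings -> 0.0
```

Real ROUGE evaluations typically also report ROUGE-2 (bigram overlap) and ROUGE-L (longest common subsequence), which follow the same precision/recall pattern over different units.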

Competition Schedule

To be announced.

Website URL

To be announced.

NetML Challenge 2020

Introduction

Recent progress in AI, machine learning, and deep learning has demonstrated tremendous success in many application domains, such as games and computer vision. Meanwhile, the challenges of proliferating data flows and increasing malicious traffic on today's Internet call for advanced network traffic analysis tools. In this competition, we challenge participants to leverage novel machine learning technologies to detect malicious flows and/or distinguish applications in a fine-grained fashion among network flows. NetML Challenge 2020 is the first Machine Learning Driven Network Traffic Analytics Challenge. In this year's challenge, a collection of 1,318,976 flows in three different datasets is given, including detailed flow features and labels. We also provide simple APIs and baseline machine learning models to demonstrate the usage of the datasets and evaluation metrics. There are 7 tracks for specific analytics objectives.
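As a toy illustration of the flow-classification task (not the challenge's actual API or data: the feature names and values below are entirely hypothetical), a minimal baseline might assign each flow to the label of the nearest class centroid computed from labeled per-flow statistics:

```python
import math

# Hypothetical per-flow feature vectors:
# (mean packet size in bytes, duration in seconds, packet count)
benign = [(900.0, 2.1, 30), (1100.0, 1.8, 45), (950.0, 2.5, 38)]
malicious = [(80.0, 0.2, 400), (60.0, 0.1, 520), (95.0, 0.3, 460)]

def centroid(rows):
    """Mean feature vector of a list of equal-length tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(flow, centroids):
    """Label the flow by its nearest centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(flow, centroids[label]))

centroids = {"benign": centroid(benign), "malicious": centroid(malicious)}
print(classify((1000.0, 2.0, 40), centroids))  # prints "benign"
print(classify((70.0, 0.15, 480), centroids))  # prints "malicious"
```

Serious entries would of course replace this with the challenge's provided APIs and stronger models (e.g. gradient-boosted trees or neural networks) trained on the real flow features; the sketch only shows the shape of the problem: labeled feature vectors in, a per-flow label out.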

This challenge is organized by the Laboratory of Advanced Computer Architecture and Network Systems (ACANETS) at the University of Massachusetts Lowell, and sponsored by Intel Corporation.

Competition Schedule

To be announced.

Website URL

To be announced.

iCartoonFace

Introduction

Nowadays, the cartoon industry is growing explosively, and more and more cartoon videos are produced every year. How to automatically understand the content of these videos, especially information about cartoon characters, is becoming increasingly important. To promote the identification of cartoon characters and deepen the understanding of cartoon video content, we hold the iCartoonFace challenge.

With the introduction of numerous large datasets, deep learning approaches have achieved remarkable progress in the field of face recognition and have surpassed human annotation performance. Nevertheless, the performance of deep learning approaches relies heavily on the availability of large training datasets. In the less-explored task of cartoon recognition, few datasets have been proposed, and those that exist can be roughly grouped into two categories. The first category is caricature datasets, which are fundamentally based on real human identities; these caricature images share strong similarities with human portraits but exaggerate certain facial features. The second category focuses on the cartoon recognition task; however, these datasets are either too small or too noisy and do not meet the demand for large-scale training data for deep learning approaches. Therefore, to enable cartoon media understanding by learning approaches, the establishment of a large-scale, challenging cartoon dataset is in high demand.

Competition Schedule

To be announced.

Website URL

The competition webpage has been released at http://challenge.ai.iqiyi.com/detail?raceId=5def71b4e9fcf68aef76a75e