ConvLab

Introduction to ConvLab

ConvLab is an open-source, multi-domain, end-to-end dialog system platform developed jointly by Microsoft Research and Tsinghua University. It provides a set of reusable components, ranging from traditional pipeline systems (dialog systems built from several independent modules) to end-to-end neural models.

Installation

Clone the ConvLab-2 repository and install its dependencies (PyTorch needs to be installed beforehand):

git clone https://github.com/thu-coai/ConvLab-2.git && cd ConvLab-2 && pip install -e .

Next, install en_core_web_sm to fix an error raised by BERTNLU:

python -m spacy download en_core_web_sm
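
As a quick sanity check (my own addition, not part of the official instructions), the following imports should all succeed once the steps above have completed:

# sanity check: every import here should work after installation
import torch                  # installed separately, before ConvLab-2
import convlab2               # installed via `pip install -e .`
import spacy
spacy.load("en_core_web_sm")  # the model downloaded for BERTNLU
print("environment looks OK")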

Using ConvLab

Here we use models built for the MultiWOZ dataset. The full pipeline agent consists of NLU, DST, Policy, and NLG modules. (The meaning of these modules is explained in a separate blog post.)

1 Build an agent

First, import the required models and packages:

from convlab2.nlu.jointBERT.multiwoz import BERTNLU
from convlab2.nlu.milu.multiwoz import MILU
from convlab2.dst.rule.multiwoz import RuleDST
from convlab2.policy.rule.multiwoz import RulePolicy
from convlab2.nlg.template.multiwoz import TemplateNLG
from convlab2.dialog_agent import PipelineAgent, BiSession
from convlab2.evaluator.multiwoz_eval import MultiWozEvaluator
from pprint import pprint
import random
import numpy as np
import torch

Next, instantiate the models and assemble the agent:

# go to README.md of each model for more information
# BERT nlu
sys_nlu = BERTNLU()
# simple rule DST
sys_dst = RuleDST()
# rule policy
sys_policy = RulePolicy()
# template NLG
sys_nlg = TemplateNLG(is_user=False)
# assemble
sys_agent = PipelineAgent(sys_nlu, sys_dst, sys_policy, sys_nlg, name='sys')

That's it.

Use the agent's response function to "chat" with the agent we just built:

sys_agent.response("I want to find a moderate hotel")
sys_agent.response("Which type of hotel is it ?")
sys_agent.response("OK , where is its address ?")
sys_agent.response("Thank you !")
sys_agent.response("Try to find me a Chinese restaurant in south area .")
sys_agent.response("Which kind of food it provides ?")
sys_agent.response("Book a table for 5 , this Sunday .")

2 Build a simulator to chat with the agent and evaluate

In many one-to-one task-oriented dialog systems, a user simulator is essential for training an RL (reinforcement learning) agent. The ConvLab framework makes no distinction between user and system: every speaker is an agent. The simulator is simply another agent, one whose policy is geared toward completing the user goal.

Here the simulator uses the Agenda policy. This policy takes dialog acts as input, which means the DST argument of the PipelineAgent must be set to None. (See the PipelineAgent documentation for more details.)

# MILU
user_nlu = MILU()
# not use dst
user_dst = None
# rule policy
user_policy = RulePolicy(character='usr')
# template NLG
user_nlg = TemplateNLG(is_user=True)
# assemble
user_agent = PipelineAgent(user_nlu, user_dst, user_policy, user_nlg, name='user')

Now that we have an agent and a simulator, we will use BiSession, a simple built-in one-to-one dialog controller. You can define your own Session class if you have special requirements. To evaluate performance, we also add a MultiWozEvaluator, which uses the parsed dialog act input and the policy's output dialog act to compute inform F1 (whether the right entities are provided), book rate, and success.

evaluator = MultiWozEvaluator()
sess = BiSession(sys_agent=sys_agent, user_agent=user_agent, kb_query=None, evaluator=evaluator)

Now let the two agents talk to each other. The key is the next_turn function of the BiSession class.

def set_seed(r_seed):
    random.seed(r_seed)
    np.random.seed(r_seed)
    torch.manual_seed(r_seed)

set_seed(20200131)

sys_response = ''
sess.init_session()
print('init goal:')
pprint(sess.evaluator.goal)
print('-'*50)
for i in range(20):
    sys_response, user_response, session_over, reward = sess.next_turn(sys_response)
    print('user:', user_response)
    print('sys:', sys_response)
    print()
    if session_over is True:
        break
print('task success:', sess.evaluator.task_success())
print('book rate:', sess.evaluator.book_rate())
print('inform precision/recall/f1:', sess.evaluator.inform_F1())
print('-'*50)
print('final goal:')
pprint(sess.evaluator.goal)
print('='*100)
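
To get a number that is less sensitive to a single sampled goal, the same loop can be repeated over several simulated dialogs. The sketch below is my own and only reuses calls already shown above (init_session, next_turn, and the evaluator's task_success):

# illustrative sketch: average task success over several simulated dialogs
def run_sessions(sess, n_dialogs=10, max_turns=20):
    successes = []
    for _ in range(n_dialogs):
        sys_response = ''
        sess.init_session()  # reset both agents; the simulator samples a new user goal
        for _ in range(max_turns):
            sys_response, user_response, session_over, reward = sess.next_turn(sys_response)
            if session_over:
                break
        successes.append(sess.evaluator.task_success())
    return sum(successes) / len(successes)

print('average task success:', run_sessions(sess))
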
> Question: how is next_turn implemented? I found the source code below and added comments based on my own understanding:

def next_turn(self, last_observation):
    """Conduct a new turn of dialog, which consists of the system response and user response.
    The variable type of responses can be either 1) str or 2) dialog act, depends on the dialog mode settings of the
    two agents which are supposed to be the same.

    Args:
        last_observation:
            Last agent response.
    Returns:
        sys_response:
            The response of system.
        user_response:
            The response of user simulator.
        session_over (boolean):
            True if session ends, else session continues.
        reward (float):
            The reward given by the user.
    """
    ## generate this turn's user response from the system's response in the previous turn
    user_response = self.next_response(last_observation)

    ## if an evaluator exists, record the system and user dialog acts
    if self.evaluator:
        self.evaluator.add_sys_da(self.user_agent.get_in_da())
        self.evaluator.add_usr_da(self.user_agent.get_out_da())
    ## check whether the dialog has ended
    session_over = self.user_agent.is_terminated()

    ## for a pipeline system (which has a DST), pass the session status to the system agent
    if hasattr(self.sys_agent, 'dst'):
        self.sys_agent.dst.state['terminated'] = session_over
    # if session_over and self.evaluator:
    #     prec, rec, f1 = self.evaluator.inform_F1()
    #     print('inform prec. {} rec. {} F1 {}'.format(prec, rec, f1))
    #     print('book rate {}'.format(self.evaluator.book_rate()))
    #     print('task success {}'.format(self.evaluator.task_success()))
    ## the user's reward for the last exchange
    reward = self.user_agent.get_reward()
    ## generate the system agent's response to the user utterance
    sys_response = self.next_response(user_response)
    ## append this turn to the dialog history
    self.dialog_history.append([self.user_agent.name, user_response])
    self.dialog_history.append([self.sys_agent.name, sys_response])

    return sys_response, user_response, session_over, reward

3 Try different combinations of models

A pipeline dialog system can be assembled from different combinations of models, including joint models: Word-DST (covering NLU and DST) and Word-Policy (covering Policy and NLG). The available models are:

  • NLU: BERTNLU, MILU, SVMNLU
  • DST: RuleDST
  • Word-DST: SUMBT, TRADE (set sys_nlu to None)
  • Policy: RulePolicy, Imitation, REINFORCE, PPO, GDPL
  • Word-Policy: MDRG, HDSA, LaRL (set sys_nlg to None)
  • NLG: Template, SCLSTM

Besides pipeline systems, there are also end-to-end models (a single model that replaces the whole multi-module pipeline):

  • End2End: Sequicity, DAMD, RNN_rollout (directly used as sys_agent)

Finally, the simulator policies:

  • Simulator policy: Agenda, VHUS (for user_policy)

Code examples:

First, import the relevant packages:

# available NLU models
from convlab2.nlu.svm.multiwoz import SVMNLU
from convlab2.nlu.jointBERT.multiwoz import BERTNLU
from convlab2.nlu.milu.multiwoz import MILU
# available DST models
from convlab2.dst.rule.multiwoz import RuleDST
from convlab2.dst.sumbt.multiwoz import SUMBT
from convlab2.dst.trade.multiwoz import TRADE
# available Policy models
from convlab2.policy.rule.multiwoz import RulePolicy
from convlab2.policy.ppo.multiwoz import PPOPolicy
from convlab2.policy.pg.multiwoz import PGPolicy
from convlab2.policy.mle.multiwoz import MLEPolicy
from convlab2.policy.gdpl.multiwoz import GDPLPolicy
from convlab2.policy.vhus.multiwoz import UserPolicyVHUS
from convlab2.policy.mdrg.multiwoz import MDRGWordPolicy
from convlab2.policy.hdsa.multiwoz import HDSA
from convlab2.policy.larl.multiwoz import LaRL
# available NLG models
from convlab2.nlg.template.multiwoz import TemplateNLG
from convlab2.nlg.sclstm.multiwoz import SCLSTM
# available E2E models
from convlab2.e2e.sequicity.multiwoz import Sequicity
from convlab2.e2e.damd.multiwoz import Damd

Natural language understanding (NLU) plus dialog state tracking (DST), or a Word-DST model:

# NLU+RuleDST:
sys_nlu = BERTNLU()
# sys_nlu = MILU()
# sys_nlu = SVMNLU()
sys_dst = RuleDST()

# or Word-DST:
# sys_nlu = None
# sys_dst = SUMBT()
# sys_dst = TRADE()

Policy plus natural language generation (NLG), or a Word-Policy model:

# Policy+NLG:
sys_policy = RulePolicy()
# sys_policy = PPOPolicy()
# sys_policy = PGPolicy()
# sys_policy = MLEPolicy()
# sys_policy = GDPLPolicy()
sys_nlg = TemplateNLG(is_user=False)
# sys_nlg = SCLSTM(is_user=False)

# or Word-Policy:
# sys_policy = LaRL()
# sys_policy = HDSA()
# sys_policy = MDRGWordPolicy()
# sys_nlg = None

Assemble the modules above into a pipeline system agent:

sys_agent = PipelineAgent(sys_nlu, sys_dst, sys_policy, sys_nlg, 'sys')

Or, instead of assembling a pipeline, use an end-to-end model directly:

# sys_agent = Sequicity()
sys_agent = Damd()

Configure a user agent the same way the system agent was configured above:

user_nlu = BERTNLU()
# user_nlu = MILU()
# user_nlu = SVMNLU()
user_dst = None
user_policy = RulePolicy(character='usr')
# user_policy = UserPolicyVHUS()
user_nlg = TemplateNLG(is_user=True)
# user_nlg = SCLSTM(is_user=True)
user_agent = PipelineAgent(user_nlu, user_dst, user_policy, user_nlg, name='user')
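
The new sys_agent / user_agent pair plugs into the same evaluation workflow as in section 2; for example (a sketch, assuming BiSession and MultiWozEvaluator from the earlier import block are still in scope):

# evaluate the new combination exactly as before
evaluator = MultiWozEvaluator()
sess = BiSession(sys_agent=sys_agent, user_agent=user_agent, kb_query=None, evaluator=evaluator)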

4 Analyze the system with the analysis tool

ConvLab-2 provides an analysis tool that produces rich statistics and summarizes common errors in simulated dialogs.

from convlab2.util.analysis_tool.analyzer import Analyzer
# if sys_nlu!=None, set use_nlu=True to collect more information
analyzer = Analyzer(user_agent=user_agent, dataset='multiwoz')

set_seed(20200131)
analyzer.comprehensive_analyze(sys_agent=sys_agent, model_name='sys_agent', total_dialog=100)

Compare multiple models:

set_seed(20200131)
analyzer.compare_models(agent_list=[sys_agent1, sys_agent2], model_name=['sys_agent1', 'sys_agent2'], total_dialog=100)
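
Note that sys_agent1 and sys_agent2 are not defined in the snippet above; as one illustration (the choice of modules is mine, not from the tutorial), they could be two pipelines that differ only in the NLU component:

# purely illustrative: two candidate system agents that differ only in the NLU module
sys_agent1 = PipelineAgent(BERTNLU(), RuleDST(), RulePolicy(), TemplateNLG(is_user=False), name='sys')
sys_agent2 = PipelineAgent(MILU(), RuleDST(), RulePolicy(), TemplateNLG(is_user=False), name='sys')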