
      Abstract Meaning Representation (AMR) Annotation Release 3.0 dataset (AMR dataset, LDC2020T02)


      Abstract Meaning Representation (AMR) Annotation Release 3.0

      Author(s): Kevin Knight, Bianca Badarau, Laura Baranescu, Claire Bonial, Madalina Bardocz, Kira Griffitt, Ulf Hermjakob, Daniel Marcu, Martha Palmer, Tim O'Gorman, Nathan Schneider

      Introduction

      Abstract Meaning Representation (AMR) Annotation Release 3.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University of Colorado's Computational Language and Educational Research group and the Information Sciences Institute at the University of Southern California. It contains a sembank (semantic treebank) of over 59,255 English natural language sentences from broadcast conversations, newswire, weblogs, web discussion forums, fiction and web text. This release adds new data to, and updates material contained in, Abstract Meaning Representation 2.0 (LDC2017T10), specifically: more annotations on new and prior data, new or improved PropBank-style frames, enhanced quality control, and multi-sentence annotations.

      AMR captures "who is doing what to whom" in a sentence. Each sentence is paired with a graph that represents its whole-sentence meaning in a tree-structure. AMR utilizes PropBank frames, non-core semantic roles, within-sentence coreference, named entity annotation, modality, negation, questions, quantities, and so on to represent the semantic structure of a sentence largely independent of its syntax.
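
      For illustration, the short Python sketch below shows how one AMR graph written in PENMAN notation can be parsed and inspected. It uses the standard "The boy wants to go" example from the AMR guidelines; the third-party penman package and the exact frame sense numbers are assumptions for demonstration only and are not part of this release.

# Minimal sketch (assumes the third-party "penman" package: pip install penman).
import penman

# "The boy wants to go." -- a canonical AMR example; frame senses are illustrative.
amr_string = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
"""

graph = penman.decode(amr_string)            # parse PENMAN text into a Graph
print(graph.top)                             # 'w', the root of the graph
for source, role, target in graph.triples:   # instance and relation triples
    print(source, role, target)
# Reusing the variable "b" as :ARG0 of go-02 encodes the within-sentence
# coreference: the boy is both the wanter and the goer.
print(penman.encode(graph))                  # serialize back to PENMAN notation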

      LDC also released Abstract Meaning Representation (AMR) Annotation Release 1.0 (LDC2014T12), and Abstract Meaning Representation (AMR) Annotation Release 2.0 (LDC2017T10).

      Data

      The source data includes discussion forums collected for the DARPA BOLT and DEFT programs, transcripts and English translations of Mandarin Chinese broadcast news programming from China Central TV, Wall Street Journal text, translated Xinhua news texts, various newswire data from NIST OpenMT evaluations and weblog data used in the DARPA GALE program. New source data in AMR 3.0 includes sentences from Aesop's Fables, parallel text and the situation frame data set developed by LDC for the DARPA LORELEI program, and lead sentences from Wikipedia articles about named entities.

      The following table summarizes the number of training, dev, and test AMRs for each dataset in the release.

      Totals are also provided by partition and dataset:
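
      As a rough illustration of how totals by partition and dataset could be reproduced from the corpus files, the Python sketch below counts the "# ::id" metadata lines that precede each AMR. The directory layout, file naming, and install path shown are assumptions modeled on earlier AMR releases, not the documented structure of this package.

# Minimal sketch; all paths and file-name patterns here are assumptions.
from pathlib import Path
from collections import Counter

root = Path("amr_annotation_3.0/data/amrs/split")     # hypothetical local path
totals = Counter()

for split_dir in sorted(root.iterdir()):               # e.g. training, dev, test
    if not split_dir.is_dir():
        continue
    for amr_file in sorted(split_dir.glob("*.txt")):
        with amr_file.open(encoding="utf-8") as fh:
            # Each AMR block in the release files starts with a "# ::id" line.
            n = sum(1 for line in fh if line.startswith("# ::id"))
        dataset = amr_file.stem.split("-")[-1]          # e.g. "bolt", "xinhua"
        totals[(split_dir.name, dataset)] += n
        totals[(split_dir.name, "ALL")] += n

for (split, dataset), count in sorted(totals.items()):
    print(f"{split:10s} {dataset:12s} {count:6d}")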

      We gratefully acknowledge the support of the National Science Foundation Grant NSF: 0910992 IIS:RI: Large: Collaborative Research: Richer Representations for Machine Translation and the support of DARPA BOLT - HR0011-11-C-0145 and DEFT - FA-8750-13-2-0045 via a subcontract from LDC. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, DARPA or the US government.

      From Information Sciences Institute (ISI)

      Thanks to NSF (IIS-0908532) for funding the initial design of AMR, and to DARPA MRP (FA-8750-09-C-0179) for supporting a group to construct consensus annotations and the AMR Editor. The initial AMR bank was built under DARPA DEFT FA-8750-13-2-0045 (PI: Stephanie Strassel; co-PIs: Kevin Knight, Daniel Marcu, and Martha Palmer) and DARPA BOLT HR0011-12-C-0014 (PI: Kevin Knight).

      From Linguistic Data Consortium (LDC)

      This material is based on research sponsored by Air Force Research Laboratory and Defense Advanced Research Projects Agency under agreement number FA8750-13-2-0045. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory and Defense Advanced Research Projects Agency or the U.S. Government.

      We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0184 Subcontract 4400165821. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA, AFRL, or the US government.

      From Language Weaver (SDL)

      This work was partially sponsored by DARPA contract HR0011-11-C-0150 to LanguageWeaver Inc. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA or the US government.

       