NLP for Work
@nlpfw
1.24K subscribers
Web archive of the channel's historical content:
https://www.notion.so/NLP-for-Work-af812710c3a543c2adc7acbdb3990036
The "For Work" family of channels. Meme channel:
@JISFW
Image channel:
@GfWR16
Feedback, submissions, and casual chat group:
@FishingFW
More highlights:
https://t.center/JISFW/13392
NLP for Work
https://thegradientpub.substack.com/p/ted-gibson-language-structure-communication-llms
Just noticed there are transcripts now.
Substack
Ted Gibson: The Structure and Purpose of Language
On why language looks the way it does, how humans process and use language, and a perspective on LLMs from linguistics.
NLP for Work
Binding Language Models in Symbolic Languages
https://lm-code-binder.github.io/
ICLR 2023, top 25%. Finally someone is actually evaluating how well LLMs can translate into formal languages.
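Binder's core move, as I read it, is to have the LM emit a program in a symbolic language (e.g., SQL) extended with LM-callable functions. A rough, runnable sketch of that execution loop with a stub in place of the model; the QA UDF, table, and data below are mine for illustration, not from the paper:

```python
import sqlite3

# Toy table standing in for a Binder-style semi-structured table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countries (name TEXT, blurb TEXT)")
conn.executemany("INSERT INTO countries VALUES (?, ?)", [
    ("France", "a republic in western Europe"),
    ("Japan", "an island nation in East Asia"),
])

# Stand-in for an LLM call; the idea is to register the model as a UDF
# so that generated SQL can defer fuzzy sub-questions to it.
def qa_stub(question: str, context: str) -> str:
    return "yes" if "island" in context else "no"

conn.create_function("QA", 2, qa_stub)

# A program the LM might generate: ordinary SQL plus an LM-backed UDF.
binder_sql = """
SELECT name FROM countries
WHERE QA('is it an island nation?', blurb) = 'yes'
"""
print(conn.execute(binder_sql).fetchall())  # [('Japan',)]
```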
NLP for Work
https://doi.org/10.1146/annurev-linguistics-030521-044439
www.annualreviews.org
Compositionality in Computational Linguistics | Annual Reviews
Neural models greatly outperform grammar-based models across many tasks in modern computational linguistics. This raises the question of whether linguistic principles, such as the Principle of Compositionality, still have value as modeling tools. We review…
NLP for Work
https://www.degruyter.com/document/doi/10.1515/lingvan-2022-0133/
De Gruyter
Measuring language complexity: challenges and opportunities
This special issue focuses on measuring language complexity. The contributions address methodological challenges, discuss implications for theoretical research, and use complexity measurements for testing theoretical claims. In this introductory article,…
NLP for Work
https://t.co/P1s4wiWCsR
Google Docs
Scaling, emergence, and reasoning (Jason Wei, NYU)
Scaling, emergence, and reasoning in large language models Jason Wei Twitter: @_jasonwei Mistakes & opinions my own, and not of my employer.
NLP for Work
https://github.com/labmlai/annotated_deep_learning_paper_implementations
Seriously cool.
GitHub
GitHub - labmlai/annotated_deep_learning_paper_implementations: 🧑‍🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga...
NLP for Work
https://yimeixiang.wordpress.com/teaching/
Some course materials on logical semantics. They steer clear of the more contentious topics, such as the ontology of meaning and the semantics-pragmatics interface, in favor of concrete semantic techniques, which makes them friendly for beginners.
http://eecoppock.info/semantics-boot-camp.pdf
Yimei Xiang ∙ 向伊梅
Teaching
Semantics II (Graduate, Rutgers, Spring 2021) Unit 1: Compositionality and Binding Part I: Compositionality, Heim & Kratzer theory of pronoun binding Readings: Heim & Kratzer (1998: Chapter…
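A taste of the kind of concrete technique these materials teach: a minimal sketch of Heim & Kratzer-style function application, with Python lambdas standing in for type-theoretic denotations (the toy lexicon is mine, not from the course):

```python
# Denotations: entities as strings, truth values as bools.
# ⟦smokes⟧ : e -> t
smokes = lambda x: x in {"john"}

# ⟦loves⟧ : e -> (e -> t), combining with the object first, then the subject.
loves = lambda y: lambda x: (x, y) in {("john", "mary")}

# Function Application: a branching node denotes f(a) when one daughter
# denotes a function f whose domain contains the other daughter's value a.
def apply(f, a):
    return f(a)

print(apply(smokes, "john"))                # ⟦John smokes⟧ = True
print(apply(apply(loves, "mary"), "john"))  # ⟦John loves Mary⟧ = True
```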
NLP for Work
https://vxtwitter.com/sedielem/status/1640145696245309440
vxTwitter / fixvx
Sander Dieleman (@sedielem)
Some thoughts on non-AR language models, and what it might take to dethrone autoregression: https://sander.ai/2023/01/09/diffusion-language.html
NLP for Work
https://drive.google.com/file/d/1BU5bV3X5w65DwSMapKcsr0ZvrMRU_Nbi/view
Do large language models need sensory grounding for meaning and understanding?
Spoiler: YES!
Yann LeCun
Courant Institute & Center for Data Science, NYU
Meta – Fundamental AI Research
2023-03-24
NLP for Work
https://zhuanlan.zhihu.com/p/597586623
知乎专栏
The Road to AGI: Technical Essentials of Large Language Models (LLMs)
The arrival of ChatGPT delighted some people and jolted others awake. Delighted, because nobody expected a large language model (LLM) to work this well; jolted awake, because we realized how far our understanding of LLMs and our ideas about their development lag behind the most advanced thinking in the world. I count myself both delighted and jolted awake…
NLP for Work
https://windowsontheory.org/2022/06/20/the-uneasy-relationship-between-deep-learning-and-classical-statistics/
Windows On Theory
The uneasy relationship between deep learning and (classical) statistics
An often-expressed sentiment is that deep learning (and machine learning in general) is “simply statistics,” in the sense that it uses different words to describe the same concepts statisticians ha…
NLP for Work
https://direct.mit.edu/books/oa-monograph/5329/The-Art-of-Abduction
NLP for Work
https://www.amacad.org/publication/human-language-understanding-reasoning
American Academy of Arts & Sciences
Human Language Understanding & Reasoning
The last decade has yielded dramatic and quite surprising breakthroughs in natural language processing through the use of simple artificial neural network computations, replicated on a very large scale and trained over exceedingly large amounts of data. The…
NLP for Work
https://thegradient.pub/graph-neural-networks-beyond-message-passing-and-weisfeiler-lehman/
The Gradient
Beyond Message Passing: a Physics-Inspired Paradigm for Graph Neural Networks
On going beyond message-passing based graph neural networks with physics-inspired “continuous” learning models
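For contrast, a minimal sketch of the message-passing baseline the article wants to move beyond: one GNN layer that sums neighbor features and applies a learned linear map (the toy graph and shapes are illustrative only):

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One GNN layer: each node aggregates neighbor features (A @ H),
    then applies a learned linear map W and a ReLU nonlinearity."""
    return np.maximum(A @ H @ W, 0.0)

# Toy graph: 3 nodes, edges 0-1 and 1-2 (adjacency with self-loops).
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
H = np.random.randn(3, 4)   # node features
W = np.random.randn(4, 4)   # layer weights
print(message_passing_layer(A, H, W).shape)  # (3, 4)
```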
NLP for Work
https://aaai2022kgreasoning.github.io/
AAAI 2022 Tutorial: Reasoning on Knowledge Graphs: Symbolic or Neural?
A decent survey, although the "reasoning" here is still reasoning in the computational sense; its connection to cognitive or mental approaches is far looser than the word suggests.
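To make the symbolic-versus-neural contrast concrete, a minimal sketch juxtaposing an explicit Horn rule with TransE-style scoring, where a triple (h, r, t) is plausible when h + r ≈ t in embedding space; the toy facts and vectors are mine, not from the tutorial:

```python
import numpy as np

# Symbolic reasoning: apply an explicit Horn rule to known facts.
facts = {("alice", "born_in", "paris"), ("paris", "city_of", "france")}
def infer_nationality(facts):
    return {(p, "nationality", c)
            for (p, r1, x) in facts if r1 == "born_in"
            for (x2, r2, c) in facts if r2 == "city_of" and x2 == x}

# Neural reasoning: TransE scores a triple (h, r, t) as -||h + r - t||,
# so plausible triples have h + r close to t in embedding space.
emb = {"alice": np.array([0.1, 0.2]),
       "france": np.array([0.9, 0.7]),
       "nationality": np.array([0.8, 0.5])}
def transe_score(h, r, t):
    return -np.linalg.norm(emb[h] + emb[r] - emb[t])

print(infer_nationality(facts))                        # discrete inference
print(transe_score("alice", "nationality", "france"))  # graded plausibility
```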
NLP for Work
https://aclanthology.org/2021.acl-long.379/
https://arxiv.org/abs/2203.00281
Hats off to the authors… CYK, an algorithm older than most of our advisors, has actually been brought down to O(n).
ACL Anthology
R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling
Xiang Hu, Haitao Mi, Zujie Wen, Yafang Wang, Yi Su, Jing Zheng, Gerard de Melo. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long…
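For reference, the textbook O(n³) CYK recognizer the comment alludes to; this is the classic algorithm with a toy CNF grammar of my own, not the paper's linear-time pruned variant:

```python
from itertools import product

# Toy grammar in Chomsky normal form: A -> B C rules plus a lexicon.
binary = {("NP", "VP"): {"S"}, ("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}}
lexical = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"}}

def cyk_recognize(words):
    n = len(words)
    # chart[i][j] = set of nonterminals deriving words[i..j]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][i] = set(lexical.get(w, set()))
    for span in range(2, n + 1):           # O(n) span lengths
        for i in range(n - span + 1):      # O(n) start positions
            j = i + span - 1
            for k in range(i, j):          # O(n) split points
                for b, c in product(chart[i][k], chart[k + 1][j]):
                    chart[i][j] |= binary.get((b, c), set())
    return "S" in chart[0][n - 1]

print(cyk_recognize("the dog chased the cat".split()))  # True
```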
NLP for Work
https://stat.columbia.edu/~cunningham/teaching/GR8201/
NLP for Work
Forwarded from Data Science Archive (小熊猫)
Alibi Explain is probably the most systematic tool for machine-learning interpretability I've seen in recent years; its documentation reads like a knowledge base. After all, you shouldn't stop at knowing a little SHAP.
https://docs.seldon.io/projects/alibi/en/latest/index.html
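A minimal usage sketch of one Alibi explainer (AnchorTabular), assuming the current alibi API as I recall it; treat the docs above as authoritative:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Train any black-box classifier; Alibi only needs its predict function.
data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Anchor explanations: find a rule that "anchors" the model's prediction.
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)

explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation.anchor)     # list of feature conditions forming the rule
print(explanation.precision)  # fraction of perturbed samples kept in-class
```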
NLP for Work
Graph Neural Networks: Foundations, Frontiers, and Applications
Lingfei Wu, JD.COM
Peng Cui, Tsinghua University
Jian Pei, Simon Fraser University
Liang Zhao, Emory University
https://graph-neural-networks.github.io/index.html