
Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://uk.player.fm/legal.

#97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language

24:58
 
 

Manage episode 353766724 series 2803422
Research has shown that humans possess strong inductive biases which enable them to learn quickly and generalize. To instill these useful human inductive biases into machines, Sreejan Kumar presented a paper at the NeurIPS conference that won an Outstanding Paper award: Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines.

The paper uses a controlled stimulus space of two-dimensional binary grids to define a space of human abstract concepts, together with a feedback loop of human-machine collaboration, to probe the differences between human and machine inductive biases.
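As a rough illustration of such a stimulus space (the grid size, sampling probability, and the symmetry concept below are assumptions for the sketch, not details taken from the paper), two-dimensional binary grids can be generated like this:

```python
import random

def random_grid(rows=7, cols=7, p=0.3, seed=None):
    """Sample a two-dimensional binary grid stimulus.

    Each cell is independently 'on' with probability p; a structured,
    human-like concept would instead be sampled from a generative
    program over such grids.
    """
    rng = random.Random(seed)
    return [[1 if rng.random() < p else 0 for _ in range(cols)]
            for _ in range(rows)]

def mirror_symmetric(grid):
    """Impose left-right symmetry, one simple 'human prior' concept.

    Keeps the left half of each row and reflects it onto the right.
    """
    cols = len(grid[0])
    half = (cols + 1) // 2
    out = []
    for row in grid:
        left = row[:half]
        right = left[: cols - half][::-1]  # mirror of the left half
        out.append(left + right)
    return out
```

Concepts built from compositional operations like this (symmetry, repetition, lines) are exactly the kind of structure that human priors favor and unstructured sampling does not.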

Making machines more human-like is important for collaborating with them and understanding their behavior. Synthesised discrete programs running on a Turing-machine computational model, rather than a neural network substrate, offer promise for the future of artificial intelligence. Neural networks and program induction should both be explored to get a well-rounded view of intelligence, one that works across multiple domains and computational substrates and can acquire a diverse set of capabilities.

Natural language understanding in models can also be improved by instilling human language biases and programs into AI models. Sreejan used an experimental framework consisting of two dual task distributions, one generated from human priors and one from machine priors, to understand the differences between human and machine inductive biases. He further demonstrated that compressive abstractions can capture the essential structure of the environment, yielding more human-like behavior. This means that emergent language-based inductive priors can be distilled into artificial neural networks, and AI models can be aligned to us, to the world, and to our values.

Humans possess strong inductive biases which enable them to quickly learn to perform various tasks. Neural networks, by contrast, lack these inductive biases and struggle to learn them empirically from observational data; without this prior knowledge, they have difficulty generalizing to novel environments.

Sreejan's results showed that when guided with representations from language and programs, the meta-learning agent not only improved performance on task distributions humans are adept at, but also showed decreased performance on control task distributions where humans perform poorly. This indicates that the abstraction supported by these representations, whether in the substrate of language or of a program, is key to developing aligned artificial agents with human-like generalization, capabilities, values, and behaviour.
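One common way such guidance can work, sketched here purely as an illustrative assumption (the loss form, the cosine-alignment term, and the weight `alpha` are not taken from the paper), is to add an auxiliary loss that pulls the agent's task embedding toward a representation derived from language or program abstractions:

```python
import numpy as np

def guided_loss(task_logits, targets, task_embedding, lang_embedding, alpha=0.5):
    """Task loss plus an auxiliary term aligning the agent's task
    embedding with a language- or program-derived representation.

    The alpha weight and cosine-alignment form are illustrative
    assumptions, not the paper's actual objective.
    """
    # standard cross-entropy on the task predictions
    shifted = task_logits - task_logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    ce = -np.log(probs[np.arange(len(targets)), targets]).mean()

    # auxiliary alignment: 1 - cosine similarity between embeddings
    cos = (task_embedding @ lang_embedding) / (
        np.linalg.norm(task_embedding) * np.linalg.norm(lang_embedding))
    return ce + alpha * (1.0 - cos)
```

Under this kind of objective, an agent whose internal representation drifts away from the human-derived one pays a penalty, which is one plausible mechanism for the pattern above: better performance on human-aligned task distributions and worse performance on the control distributions.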

References

Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines [Kumar et al. / NeurIPS]

https://openreview.net/pdf?id=buXZ7nIqiwE

Core Knowledge [Elizabeth S. Spelke / Harvard]

https://www.harvardlds.org/wp-content/uploads/2017/01/SpelkeKinzler07-1.pdf

The Debate Over Understanding in AI's Large Language Models [Melanie Mitchell]

https://arxiv.org/abs/2210.13966

On the Measure of Intelligence [Francois Chollet]

https://arxiv.org/abs/1911.01547

ARC challenge [Chollet]

https://github.com/fchollet/ARC


149 episodes
