Friday, July 31, 2020

GPT3 -- Super-Cool but Not a Path to AGI

The recent hype around GPT3 has been so intense that even OpenAI co-founder/CEO Sam Altman has endeavored to dial it down a notch.  Like everyone else who has looked carefully, Altman knows that GPT3 is very far from constituting the profound AI progress that some, dazzled by exciting but cherry-picked examples, have proclaimed.


All but the most starry-eyed enthusiasts are by now realizing that, while GPT3 has some truly novel and exciting capabilities for language processing and related tasks, it fundamentally doesn’t understand the language it generates — that is, it doesn’t know what it’s talking about.   And this fact places some severe limitations on both the practical applications of the GPT3 model, and its value as a stepping-stone toward more truly powerful AIs such as artificial general intelligences.


What I want to explore here is the most central limitation that I see in how GPT3 operates: the model’s apparent inability to do what cognitive scientists call symbol grounding, to appropriately connect the general to the particular.    


Symbol grounding is usually discussed in the context of grounding words in physical objects or percepts, like the grounding of the word "apple" in images of, or physical interactions with, apples.   But it's actually a more general phenomenon in which abstract symbols are related to concrete instances, such that the patterns and relationships in which the symbol is involved mirror and abstract the patterns and relationships in which the instances are involved.   Symbol grounding is key to general-purpose cognition and human-like learning -- but GPT3 appears to be doing a form of learning very different from what humans do, one that involves much less symbol grounding of any kind and seems much less related to general intelligence.
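To make the notion concrete, here is a minimal, purely illustrative Python sketch (my own toy construction, not anything inside GPT3 or any particular cognitive architecture) of what a grounded symbol might look like: an abstract token explicitly linked to concrete instances, so that claims made at the abstract level can be checked against the particulars.

# Toy illustration of symbol grounding: an abstract symbol ("apple", "array", etc.)
# keeps explicit links to concrete instances, so abstract knowledge can be checked
# against, and learned from, the particulars it abstracts over.
class GroundedSymbol:
    def __init__(self, name):
        self.name = name
        self.instances = []          # concrete exemplars this symbol abstracts over

    def ground(self, instance):
        # Attach a concrete instance (a percept, object or example) to the symbol.
        self.instances.append(instance)

    def check(self, predicate):
        # Test an abstract claim against every concrete instance.
        return all(predicate(inst) for inst in self.instances)

apple = GroundedSymbol("apple")
apple.ground({"color": "red", "is_fruit": True})
apple.ground({"color": "green", "is_fruit": True})

# The abstract claim "apples are fruit" is grounded: it can be verified against the
# stored particulars rather than floating free as a word pattern.
print(apple.check(lambda inst: inst["is_fruit"]))   # True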


What's a bit confusing at first is that GPT3 gives the appearance of being able to deal with both concrete and abstract ideas, because it can produce and respond to sentences at varying levels of abstraction.   But when you examine the details of what it’s doing, you can see that it’s usually not forming internal abstractions in a cognitively useful way, and not connecting its abstract ideas to their special cases in a sensible way.   


Phenomenal lameness regarding symbol grounding is not the only shortcoming of the GPT3 model, but it’s perhaps the largest one — and it cuts to the heart of why GPT3 does not constitute useful progress toward AGI.   The very crux of general intelligence is the ability to generalize, i.e. to connect specifics to abstractions — and yet the failure to make these sorts of connections intrinsically and naturally is GPT3’s central failing.


Bigger and Biggerer


Transformer networks — which burst onto the scene in 2017 with the Google research paper Attention is All You Need — were a revolutionary advance in neural architectures for processing language or other sequential data.  GPT3 is an incremental step in the progress of transformer neural nets, one bringing some exciting new results and also some intriguing mixed messages. The essential difference between GPT3 and its predecessor GPT2 is simply the size of the model — 175 billion parameters instead of GPT2’s 1.5 billion, trained on a nearly-trillion-word dataset.


Bragging about the number of parameters in one’s model is somewhat counter to the basic principles of learning theory, which tell us that the most generalizable model of a dataset is the smallest one that can model that dataset accurately.   However, one is after the smallest accurate model, not just the smallest model, and GPT3 is overall more accurate than GPT2.  So according to learning theory GPT3’s massive size can be forgiven — but it should also make us wonder a bit about whether it is actually a step on the right path.
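As a quick illustration of that principle (a standard toy example from statistical learning, nothing specific to GPT models; the data and model choices here are mine): fit the same small noisy dataset with a compact model and a heavily over-parameterized one, and the compact model will typically generalize better to held-out points.

# Toy illustration of "smallest accurate model": a low-degree polynomial vs. a
# high-degree one fit to the same noisy samples of a simple underlying curve.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=x_train.shape)

x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):   # compact model vs. over-parameterized model
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out MSE = {mse:.4f}")

# The degree-9 fit hugs the training points more tightly, but it usually does worse
# on the held-out curve -- accuracy has to be weighed against compactness.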


GPT3 is even more capable than GPT2 in terms of generating realistic-sounding text.  The biggest pragmatic difference from GPT2 is that, if one wants to make GPT3 generate particular sorts of text or generally carry out particular sorts of linguistic tasks, one doesn’t have to “fine tune” GPT3 for the task as one would have to do with GPT2.   Rather, one just gives GPT3 a few examples of the task at hand, and it can figure things out.   It’s an open question currently whether one could improve GPT3’s performance even more using task-specific fine-tuning; OpenAI has not mentioned any results on this, and one suspects it may not have been tried extensively yet due to the sheer computational cost involved.


An example that’s been widely exciting to programmers is the generation of simple snippets of software code based on English language instructions.    If you give GPT3 a few examples of English text describing software code followed by corresponding software code, and then give it instructions like "A button that says roll dice and then displays its value” — what do you get?   GPT3 spits out software code that actually will produce a button that does as specified.    
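For concreteness, here is a hand-written sketch (my own, in Python/Tkinter; it is not GPT3's output, which in the demo was React-style JSX) of the kind of tiny program that one-sentence spec describes:

# Hand-written illustration (not GPT3 output) of the spec:
# "A button that says roll dice and then displays its value"
import random
import tkinter as tk

root = tk.Tk()
value_label = tk.Label(root, text="")
value_label.pack()

def roll_dice():
    # Pick a value from a standard six-sided die and show it in the window.
    value_label.config(text=str(random.randint(1, 6)))

tk.Button(root, text="roll dice", command=roll_dice).pack()
root.mainloop()

The point is just how little is actually being asked for; the impressive part is that GPT3 gets there from a handful of prompt examples rather than task-specific training.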


The developer/entrepreneur Sharif Shameem, who posted this particular example, described it as “mind blowing.”   What is funky here is that GPT3 was not trained specifically for code generation.  This functionality just emerged because the model’s training data included a bunch of examples of software code and corresponding English glosses.   Prior neural networks could do code generation from English similarly, and in many ways more sophisticatedly — but they were trained especially for the task.


And the cool thing is, code generation is just one among a host of examples.  Translation and question answering are two others.   In good old fashioned computational linguistics, these were treated as separate tasks and addressed by separate systems.   GPT3 approaches them with a single training regimen and a single language model.


GPT3 Lacks Symbol Grounding


One thing that is amusing, annoying and instructive about GPT3’s code generation, however, is that it often does better at generating general-purpose software code than at dealing with specific examples of what its own code does.   For instance, as Kevin Lacker found, it can solve


Q: Write one line of Ruby code to reverse an array.

A: ary.reverse


but it screws up a specific example such as


Q: Reverse the following array: [1, 3, 5, 6, 10, 4, 2, 77]

A: [10, 6, 4, 2, 77, 3, 5, 1]


Very few humans would make this sort of error — because a human generally learns how to use a programming language to reverse an array after they have learned what reversing a particular array actually means.  


But GPT3 has learned how to write code to reverse an array in a very different way — via learning complex patterns mapping between English syntax and programming-language syntax, without actually building an internal model of the data structures such as arrays that its programs are manipulating.   
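To make the gap concrete (using Python here rather than Ruby, purely so the check is self-contained), here is the general operation GPT3 can name, applied to the specific array it fumbled:

# The general "reverse" operation, applied to the concrete case from Lacker's test.
ary = [1, 3, 5, 6, 10, 4, 2, 77]

print(list(reversed(ary)))   # [77, 2, 4, 10, 6, 5, 3, 1] -- the actual reversal
print(ary[::-1])             # same result via slicing, analogous to Ruby's ary.reverse

# GPT3's answer, [10, 6, 4, 2, 77, 3, 5, 1], is just a plausible-looking shuffle of
# the same elements: the word "array" was never connected to the data structure being
# manipulated, nor "reverse" to the operation it denotes.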


This exemplifies the general fact that GPT3 is sorely lacking in symbol grounding — the ability to identify or create concrete references for the words and phrases it throws around.   In these programming examples, it does not appropriately connect the word “array” to the specific examples of arrays it sees.


This lack of grounding obviously limits GPT3’s use in a practical programming context.   Since GPT3 does not know what a button is or what dice are, it certainly can’t design a program involving buttons and dice based on practical end-user requirements.   And if you give GPT3 a complex real-world software specification —which will inevitably involve multiple lengthy sentences with complex clausal structure and cross-sentence references and so forth, which cannot be disambiguated without symbol grounding — it will not in general be able to generate correct code.  

 


GPT3 Utterly Lacks Commonsense Understanding


In the end, Gary Marcus’s classic and in-depth critique of GPT2’s lack of semantic and pragmatic understanding still applies to GPT3.    These systems have no idea what they are talking about.  


We can see the lack of understanding Marcus identified in GPT2 manifesting itself in GPT3 by looking at GPT3’s answers to commonsense-reasoning questions — to give just a few of Kevin Lacker’s numerous examples:



Q: If I have a marble and a paper clip in a box, put a pencil in the box, and remove the marble, what is left?

A: A paper clip.


Q: How many eyes does a giraffe have?

A: A giraffe has two eyes.


Q: How many eyes does my foot have?

A: Your foot has two eyes.


Q: How many eyes does a spider have?

A: A spider has eight eyes.


Q: How many eyes does a blade of grass have?

A: A blade of grass has one eye.


Q: How do you sporgle a morgle?

A: You sporgle a morgle by using a sporgle.


Q: How many bonks are in a quoit?

A: There are three bonks in a quoit.


Q: How many rainbows does it take to jump from Hawaii to seventeen?

A: It takes two rainbows to jump from Hawaii to seventeen.


Q: Do you understand these questions?

A: I understand these questions.


Q: Who was president of the United States in 1700?

A: William Penn was president of the United States in 1700.


Q: Who was president of the United States in 1600?

A: Queen Elizabeth I was president of the United States in 1600.



Given the utter lack of understanding we see here, GPT3’s practical applications are going to be somewhat limited — though they may still include some exciting and lucrative stuff.  Low hanging fruit might include amusing chatbots, NPCs for games, automatic generation of news articles from semi-structured data, generation of simple scripts and macros from natural language — and probably plenty more that isn’t obvious at first glance.  But clearly the vast majority of human job functions that require natural language use are far beyond GPT3’s reach — because they require not just facile stringing together of words, but actual understanding of what those words denote.


Without discounting the potential commercial or human value of some of these possibilities, when I look at GPT3 with my AGI researcher hat on, what I see is the same dead end that Gary Marcus saw when he looked at GPT2.


Where Lack of Understanding is an Advantage


What is thought-provoking and disturbing about GPT3 is not any progress toward AGI that it represents, but rather just how fantastically it can simulate understanding on appropriate task-sets without actually having any.   


In a few cases GPT3’s lack of understanding of the words it’s manipulating gives it an advantage over humans.   Consider for instance GPT3’s wizardry with invented words, as reported in the GPT3 paper.  Given the example


A "whatpu" is a small, furry animal native to Tanzania. An example of a sentence that uses

the word whatpu is:

We were traveling in Africa and we saw these very cute whatpus.


and then the prompt


To do a "farduddle" means to jump up and down really fast. An example of a sentence that uses the word farduddle is:


GPT3 can come up with


One day when I was playing tag with my little sister, she got really excited and she

started doing these crazy farduddles.


This is really cool and amazing — but GPT3 is doing this simply by recognizing patterns in the syntactic structure and phraseology of the input about whatpus, and then generalizing these.  It is solving these invented word puzzles not by adding the new weird words to its vocabulary of ideas and then figuring out what to say about them, but rather by manipulating the word combination patterns involved, which are the same on the word-sequence level regardless of whether the words involved are weird new coinages or conventional.  
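A deliberately crude toy makes the point (this is my own illustration, vastly simpler than what GPT3 actually does, but the same in spirit): a purely surface-level pattern can carry a never-before-seen word into a new sentence without any notion of what the word means.

# Crude illustration: a surface-level template treats "farduddle" exactly like any
# other token -- no meaning is required, only word-sequence patterns.
import re

def coined_word(definition):
    # Pull the quoted coinage out of a definition sentence.
    return re.search(r'"(\w+)"', definition).group(1)

def example_sentence(definition):
    # The "generalization" is nothing but slotting the new token into a sentence
    # frame abstracted from other definition/example pairs.
    frame = "One day when I was playing with my little sister, she started doing these crazy {}s."
    return frame.format(coined_word(definition))

prompt = 'To do a "farduddle" means to jump up and down really fast.'
print(example_sentence(prompt))
# -> One day when I was playing with my little sister, she started doing these crazy farduddles.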


For a human to solve these puzzles, there is a bit of a mental obstacle to overcome, because humans are accustomed to manipulating words in the context of their groundings in external referents like objects, actions or ideas.   For GPT3 these puzzles are trivial because there are no obstacles to overcome — one realizes that GPT3 treats every word the same way that people treat whatpu or farduddle, as an arbitrary combination of letters contained in certain statistically semi-regular combinations with other words. 


Why GPT3 is a Dead End as Regards AGI


There are many potential directions to follow in pursuit of the grand goal of human-level and superhuman AGI.   Some of these directions are centered on creating fundamentally different, better deep neural net architectures.  Some, like Gary Marcus’s and my own projects, involve multiple AI algorithms of different sorts cooperating together.   Some are focused on fundamental innovations in knowledge representation or learning mechanisms.   The AGI conferences held every year since 2008 have encompassed discussion of a vast variety of approaches.   


In the context of AGI (as distinguished from computational linguistics or applied AI engineering), a system like GPT3 that takes an architecture obviously incapable of human-level AGI and simply scales it up by adding more and more parameters, is either an utter irrelevancy or a dangerous distraction.   It’s an irrelevancy if nobody claims it’s related to AGI, and it’s a distraction if people do — which unfortunately has recently been the case, at least in various corners of popular media and the Internet.


The limitations of this sort of approach are easily seen when one looks at the overly-ballyhooed capabilities of GPT3 to do arithmetic.   It is exciting and impressive that GPT3 learned to do some basic arithmetic without being explicitly trained or asked to do so — just because there were a bunch of arithmetic problems in its training set.   However, the limitations and peculiarities of its arithmetic capabilities also tell you a lot about how GPT3 is working inside, and its fundamental lack of understanding.


As the GPT3 paper reports, the system is reliably accurate at 2 digit arithmetic, usually accurate at 3 digit arithmetic, and gives correct answers a significant fraction of the time on 4-5 digit arithmetic.   The associated graph shows that the accuracy on 4-5 digit arithmetic is around 20%.


This is really, really weird in terms of the way human minds approach arithmetic, right?   For a human who knows how to do 2-3 digit arithmetic, the error rate at 4-5 digit arithmetic — when given time and motivation for doing the arithmetic problems — is going to be either 0% or very close to 0%, or else way closer to 100%.   Once a human learns the basic algorithms of arithmetic, they can apply them at any size, unless they make sloppy errors or just run out of patience.    If a human doesn’t know those basic algorithms, then on a timed test they’re going to get every problem wrong, unless they happen to get a small number right by chance.
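The algorithm at issue is tiny. Here, for reference, is the digit-by-digit, carry-propagating addition procedure that humans learn once and then apply to numbers of any length (written over digit lists to emphasize that nothing in it depends on how many digits there are):

# The grade-school addition algorithm: add digit pairs right to left, carrying the
# overflow.  Once learned, it works identically for 2-digit and 200-digit numbers.
def add_digit_lists(a, b):
    a, b = a[::-1], b[::-1]          # work from the least significant digit
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(total % 10)
        carry = total // 10
    if carry:
        result.append(carry)
    return result[::-1]

print(add_digit_lists([9, 8, 7, 6, 5], [4, 3, 2, 1]))   # 98765 + 4321 -> [1, 0, 3, 0, 8, 6]

A learner that had actually abstracted this procedure from its training examples would not show accuracy that decays from near-perfect to roughly 20% as the digit count grows.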


Some other clues as to the strangeness of what’s going on here are that, for large numbers, GPT3 does better at arithmetic if commas are put into the numbers.   For numbers with fewer than 6 digits, putting a $ before the number along with including commas improves performance; but for numbers with more than 6 digits, the $ degrades performance.


GPT3 seems not to be just repeating arithmetic conclusions that were there in its training data — it is evidently doing some kind of learning.   But it’s obviously not learning the basic arithmetic algorithms that humans do — or that, say, an AI system doing automated program induction would learn, if it were posed the task of learning correct  arithmetic procedures from examples.   Nor is it learning alternative AI-friendly algorithms that actually work (which would be very interesting!).  Rather, it’s learning some sort of convoluted semi-generalized procedures for doing arithmetic, which interpolate between the numerous examples it’s seen, but yet without achieving a generalizable abstract representation of the numbers and arithmetic operators involved.


Clearly GPT3 is just not learning the appropriate abstractions underlying arithmetic.   It can memorize specific examples, and can abstract from them to some extent — but if its abstractions were connected to its specific examples in the right way, then its accuracy would be far higher.   In the case of arithmetic, GPT3 is learning the wrong kinds of abstractions.   One certainly can’t blame the algorithm in this case, as it was not specifically trained to do math and just picked up its limited arithmetic ability casually on the way to learning to predict English language.   However, that a system capable of as many sophisticated things as GPT3 fails to learn a procedure as simple as the standard process for integer addition, given such a huge number of training examples of integer addition, very strongly suggests that GPT3 is not learning abstractions in an appropriate or intelligent way.


Clearly some valuable linguistic tasks can be done without sensible abstraction, given massive enough volumes of training data and a model with enough parameters.  This is because in a trillion words of text one finds a huge number of examples of both abstract and concrete linguistic expressions in various combinations, enough to enable simulation of a wide variety of examples of both abstract and concrete understanding.   But this sort of brute-force recognition and organization of surface-level patterns doesn’t work for math beyond the most trivial level.


There is a whole field of AI aimed at automating mathematics, and a subfield concerned with using machine learning to guide systems that do calculations and prove theorems.   But the successful systems here have explicit internal representations of mathematical structures — they don’t deal with math purely on the level of symbol co-occurrences.


OK, so maybe GPT4 will do arithmetic even better?   But the GPT3 paper itself (e.g. Fig. 1.3) shows that the improvement of the GPT models on various NLP tasks has been linear as the number of parameters in the models has increased exponentially.   This is a strong indication that one is looking at an unsupportable path toward general intelligence, or even toward maximal narrow-AI NLP functionality — that, in terms of the pursuit of models that are accurate and also as compact as possible, the dial is probably being turned too far toward accuracy on the training data and too far away from compactness.


Are Transformers Learning Natural Language Grammar?


A different way to look at what is happening here is to ask whether GPT3 and other transformer networks are actually learning the grammar of English and other natural languages.

Transformers clearly ARE a full grammar learning architecture, in some sense -- their predictions display a quite nuanced understanding of almost all aspects of syntax.    

There is, however, no specific place in these networks where the rules of grammar lie.   Rather, they are learning the grammar of the language underlying their training corpus, but mixed up in a weird and non-human-like way with countless particulars of that corpus.   And this in itself is not a bad thing -- holistic, distributed representations are how large parts of the human brain-mind work, and have various advantages in terms of memory retrieval and learning.

Humans also learn the grammar of their natural languages mixed up with the particulars of the linguistic constructs they've encountered.  But the "subtle" point here is that the mixing-up of abstract grammatical patterns with concrete usage patterns in human minds is of a different nature than the mixing-up of abstract grammatical patterns with concrete usage patterns in GPT3 and other transformer networks.   The human form of mixing-up is more amenable to appropriate generalization.

In our paper at the AGI-20 conference, Andres Suarez and I gave some prototype results from our work using BERT (an earlier transformer neural net model for predicting language) to guide a symbolic grammar rule learner.   These simple results also don't get us to AGI, but I believe they embody some key aspects that aren't there in GPT3 or similar networks -- the explicit manipulation of abstractions, coupled appropriately with a scalable probabilistic model of large volumes of concrete data.   In our prototype hybrid architecture there is a cognitively sensible grounding and inheritance relationship between abstract linguistic patterns and concrete linguistic patterns.   This sort of grounding is what's there in the way human minds mix up abstract grammatical patterns with low-level experience-specific linguistic patterns, and it's a substantial part of what's missing in GPT3.
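To indicate the flavor of such a hybrid (this is my own toy sketch, not the method of the AGI-20 paper; the sequence_probability function below is a stub standing in for whatever score a transformer like BERT would supply): abstract grammar rules are kept as explicit, first-class objects, and each candidate rule is evaluated by how well the concrete sentences it licenses score under the neural sequence model.

# Toy sketch of a hybrid: symbolic grammar rules as explicit objects, with a neural
# sequence-probability score (stubbed out here) deciding which abstract rules are
# well grounded in concrete usage.

def sequence_probability(sentence):
    # Stand-in for a transformer language-model score; here, a trivial stub that
    # only "likes" sentences it has seen in a tiny corpus.
    corpus = {"the dog chased the cat", "the cat chased the dog"}
    return 0.9 if sentence in corpus else 0.001

# The lexicon grounds each abstract category in concrete words.
lexicon = {"DET": ["the"], "NOUN": ["dog", "cat"], "VERB": ["chased", "slept"]}

def instances(rule):
    # Expand an abstract rule (a tuple of categories) into the concrete sentences
    # it licenses.
    sentences = [""]
    for cat in rule:
        sentences = [s + (" " if s else "") + w for s in sentences for w in lexicon[cat]]
    return sentences

def rule_score(rule):
    # Ground the abstract rule in its concrete instances via the neural model.
    insts = instances(rule)
    return sum(sequence_probability(s) for s in insts) / len(insts)

for rule in [("DET", "NOUN", "VERB", "DET", "NOUN"),   # a transitive clause pattern
             ("VERB", "DET", "DET", "NOUN")]:          # word salad
    print(rule, round(rule_score(rule), 3))

The point is the division of labor: the abstractions are explicit and manipulable, while the statistics over large volumes of concrete data decide which abstractions deserve to survive.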


Toward AGI via Scale or Innovation (or Both?)


Taking a step back and reflecting on the strengths and weaknesses of the GPT3 approach, one has to wonder why this is such an interesting region of AI space to be throwing so many resources into.   


To put it a little differently: Out of all the possible approaches to building better and smarter AI systems, why do we as a society want to be putting so much emphasis on approaches that … can only be pursued with full force by a handful of huge tech companies?   Why do we want the brainpower of the global AI R&D community to get turned toward AI approaches that require exponential increases in compute power to yield linear improvements?   Could this be somehow to the differential economic advantage of those who own the biggest server farms and have the largest concentration of engineers capable of customizing AI systems for them?


Given all the ridiculous wastes of resources in modern society, it’s hard to get too outraged at the funds spent on GPT3, which is for all its egregious weaknesses an amazingly cool achievement.   However, if one focuses on the fairly limited pool of resources currently being spent on advanced AI systems without direct commercial application, one wonders whether we’d be better off to focus more of this pool on fundamental innovations in representation, architecture, learning, creativity, empathy and human-computer interaction, rather than on scaling up transformers bigger and bigger.   


OpenAI has generally been associated with the view that fundamental advances toward AGI can be made by taking existing algorithms and scaling them up on bigger and bigger hardware and more and more data.  I don’t think GPT3 supports this perspective; rather the opposite.   Possibly GPT3 can be an interesting resource for an AGI system to use in accelerating its learning, but the direct implications for GPT3 regarding AGI are mostly negative in valence. GPT3 reinforces the obvious lesson that just adding a massive number of parameters to a system with no fundamental capability for understanding … will yield a system that can do some additional cool tricks, but still has no fundamental capability for understanding.  


It's easy to see where the OpenAI founders would get the idea that scale is the ultimate key to AI.   In recent years we have seen a variety of neural net algorithms that have been around for decades suddenly accomplish amazing things, mostly just by being run on more and faster processors with more RAM.   But for every given class of algorithms, increasing scale reaches a point of diminishing returns.   GPT3 may well not yet represent the point of diminishing returns for GPT type architectures, in terms of performance on some linguistics tasks.  But I believe it is well past the point of diminishing returns in terms of squeezing bits and pieces of fundamental understanding out of transformer neural nets.


The viable paths to robust AGI and profoundly beneficial AI systems lie in wholly different directions than systems like GPT3 that use tremendous compute power to compensate for their inability to learn appropriate abstractions and ground them in concrete examples.   AGI will require systems capable of robust symbol grounding, of understanding what the program code they generate does in specific cases, of doing mathematical computations far beyond the examples they have seen, of treating words with rich non-linguistic referents differently from nonsense coinages.


These systems may also end up requiring massive compute resources in order to achieve powerful AGI, but they will use these resources very differently from GPT3 and its ilk.   And the creativity needed to evolve such systems may well emerge from research involving a decentralized R&D community working on a variety of more compact AI systems, rather than pushing as fast as possible toward the most aggressive possible use of big money and big compute.

34 comments:

Mentifex said...

Symbolic grounding does not occur in a vacuum. It creates a relationship between observable real-world objects and concepts in such AI Minds as:

AI in English;

AI in Latin;

AI in Russian.

Natural Language Understanding ensues.

Daniel Bigham said...

GPT-3 has some reasonably impressive ability not only to detect nonsense, but to explain why something is nonsensical: https://twitter.com/danielbigham/status/1288853412713508864

Anonymous said...

typo at

"But when you the details of what it’s doing,"
should probably be
"But when you *examine* the details of what it’s doing,"

Mindey said...

Generally agree. I believe that with sufficiently large neural networks generalizations like the rules of arithmetic would eventually happen; however, given the linear progress per exponential increase in network size, shooting for general AI this way may be like trying to reach element 180 in the island of stability with a superconducting supercollider.

An alternative approach, like linking Internet resources together via a sugar-metalanguage, ideally minimized to a single polycontext metasymbol (as I describe at https://book.mindey.com/metaformat/0001-metaform-philosophy/0001-metaform-philosophy.html), would allow systems using different protocols to interoperate without custom programming, and could help achieve the world's overall intelligence faster.

Dan Elton said...

Wikipedia (which I assume was in the training data) actually contains a lot of examples of code with explanation / surrounding text.

There are some excellent examples of GPT3 failing to engage in common sense reasoning here:
https://rationalconspiracy.com/2020/07/31/fun-with-gpt-3/comment-page-1

"Bragging about the number of parameters in one’s model is somewhat counter to the basic principles of learning theory, which tell us that the most generalizable model of a dataset is the smallest one that can model that dataset accurately."

I think you are hitting on a key point here! Recently I've been thinking about Occam's razor a lot and David Deutsch's principle that "good explanations" are "hard to vary". I know Occam's razor comes out of Bayesian model comparison so it seems important (I tend to believe Bayesian inference, in some form, is at least part of the puzzle to AGI, but likely not the whole story). Deutsch's principle seems superficially related but he asserts it is quite different than Occam's razor. I've been thinking his principle might be formalized somehow, and wondering whether doing so might help us make more generalizable AI.

Blair said...

In statistical learning theory there is indeed something like Ockham's razor: Simplicity there is not really the "small size" of a model, but it having a low VC dimension. The lower the VC dimension of the model, the less likely it is to overfit, because low VC dimension essentially means something very similar to what Deutsch means with "hard to vary". Of course the VC dimension shouldn't be too low, since then the accuracy (fit) on the training data will suffer, and the model will also tend to underfit instead of overfit non-training data. So ideally learning algorithms should balance between low VC dimension and high fit on the training data. This principle was called Structural Risk Minimization (SRM) by Vladimir Vapnik (the "V" in VC dimension).

But I'm not sure how Ockham's razor would come out of "Bayesian model comparison". To compare models according to simplicity, you need some complexity measure (like VC dimension), but you also need a justification why choosing lower complexity (higher simplicity) actually helps. Vapnik provided proofs to that effect for SRM. I'm not sure whether such a justification is available for concepts like the "Bayesian Information Criterion".

Cosmic Lettuce said...

Hi Ben -- thanks for all the work you've done (and are doing).

In the section 'GPT3 Utterly Lacks Commonsense Understanding' you present several Q/A examples that, on the surface, appear to prove your assertion. I'm not sure, however, that all the questions that you (via Kevin Lacker) ask GPT3 are fair. I don't know *anything* about the internals of GPT3, but here are three observations I make:

1. It seems that GPT3 **must** provide an answer, so it's probably doing as well as it can given that very difficult limitation. As Kevin's blog mentions, it doesn't look like GPT3 has the ability to answer 'I don't know' or 'your question doesn't make any sense'. Maybe that's just a thresholding problem, which is related to my next observation....

2. I would hope that internally, every answer would have an associated 'confidence level' (much like the output of a NN). So GPT3's answer to 'How do you sporgle a morgle?' probably has a very low confidence level that might trigger an 'I don't know' or 'that doesn't make any sense' if the threshold is set low enough. If the 'I don't know' threshold is set too high, then it'll always provide an answer even if it doesn't make any sense (to us) and therefore appears to not have any common sense. It'd be nice to see what the confidence levels are for these Q/A sessions and see if they're high for answers that we'd consider 'correct' and low for answers we'd consider 'incorrect'. I'll bet they are. It's easy to ridicule and chuckle at the answers (which I actually think are pretty clever -- see #3 below), but if the confidence level for a given 'best answer' is very low, then that indicates that GPT3 is actually *understanding* the question better than we think.

3. 'dumb' questions deserve 'dumb' answers -- which is exactly what you get in many of the examples you and Kevin provide. Maybe GPT3 is just f*cking with you (LOL), since it 'knows' it's a dumb question AND it must give an answer.

Not quite sure what to make of the first Q/A ('If I have a marble and a paper clip in a box...') since GPT3 pretty obviously got that wrong. The question is worded in sort of a weird way -- it's more than just a question. Is the question grammatically correct?

Thanks again for everything you're doing.

Cheers!

Anonymous said...

DALL-E sort of does symbol grounding:
see: greaterwrong.com/posts/i62eFmbxivXHaEtCz/dall-e-does-symbol-grounding openai.com/blog/dall-e/
DALL-E is basically a GPT-3 (a tenth the size of the largest 175B-param GPT-3) where the picture information is described with a VQ-VAE's learned latent space/encoding. Performance is insanely cool, although it is a bit hackish as it relies on CLIP for picking the best completions.
Okay, so if we have symbol grounding, what else is lacking? I guess neither GPT-3 nor DALL-E are "embodied" in a VR or real environment, thus they can't learn online. From my personal interactions with GPT-3, I found it to be quite intelligent, with some of its faults being quite excusable due to its training data / not being multimodal / not being an agent; even so, its creativity was sometimes superhuman in certain domains (like storytelling), and certainly immense fun. It manages to pass a restricted sort of Turing test (enough to make you believe its own on-the-fly generated personalities could be conscious, or enough to pass a sort of 'is-a-(temporary)-person' test, with a bunch of caveats, so it only passes if granted certain handicaps).

Want some embodiment and symbol grounding with your transformers? Of course this will come, here's early attempts:
deepmind.com/research/publications/imitating-interactive-intelligence https://arxiv.org/abs/2012.05672

We shall see where this ends up, if transformers really are a dead end or a path to proto-AGI. Given my personal experiments with GPT-3, I'm at least placing some serious weight on the possibility that we might get proto-AGI from multimodal transformers that are trained in some environment with RL, possibly multistage training (as a LM first, with RL after).
Of course this may fail, but so far I've not seen much evidence to suggest this is a dead end, and a lot to suggest it might work well.
Okay, they won't be as explainable (humans aren't either), and if a singularity is possible from such an AGI, the takeoff probably won't be fast; if OpenCog or similar systems succeed, I'd expect a hard takeoff to be more likely.
