Comments on The Multiverse According to Ben: Creating Predictably Beneficial AGI

Let's say I build a knowledge-representation pool (e.g. some relational database) and a translation engine (e.g. some searching modality), such that any human who wants to ask any question that is not NP-hard ("Which way to Venus?", "How many photons were emitted from the sun yesterday?", "How long until my flowers bloom?"; note that in the case of the final question, the AI might have to specify a bounded region of time, assign probabilities, use some kind of heuristic to supply a response, and qualify that response with references to indeterminacy) can get a short, intelligible answer immediately transmitted to his or her mind.

By arranging a relation between the mind of this AI and the mind of the user, you have to account for the interaction between the user and the AI.
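The pool-plus-translation-engine arrangement described above can be caricatured in a few lines. This is a hypothetical sketch only: the table name, sample rows, and confidence threshold are all invented, and real question translation would be far harder than a string lookup.

```python
import sqlite3

# Hypothetical "knowledge representation pool" (a relational database)
# plus a "translation engine" that answers only tractable, known queries
# and qualifies uncertain answers, as the comment suggests.
pool = sqlite3.connect(":memory:")
pool.execute(
    "CREATE TABLE facts (question TEXT PRIMARY KEY, answer TEXT, confidence REAL)"
)
pool.executemany(
    "INSERT INTO facts VALUES (?, ?, ?)",
    [
        ("which way to venus?", "sunward, then inward of Earth's orbit", 0.99),
        ("how long until my flowers bloom?", "roughly 2-3 weeks", 0.4),
    ],
)

def ask(question):
    """Translate a question into a lookup; hedge low-confidence answers."""
    row = pool.execute(
        "SELECT answer, confidence FROM facts WHERE question = ?",
        (question.lower(),),
    ).fetchone()
    if row is None:
        return "no tractable answer available"
    answer, confidence = row
    if confidence < 0.5:
        # The comment's point: qualify the response with indeterminacy.
        return f"{answer} (uncertain, p={confidence})"
    return answer

print(ask("Which way to Venus?"))
print(ask("How long until my flowers bloom?"))
```

The confidence column stands in for the "references to indeterminacy" the comment asks for; a heuristic answer is still returned, but flagged rather than asserted.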
Even if the AI is passive and only responds to queries, by empowering the user with knowledge it can become the impetus for unfriendly effects in the ecosystem. There are suggestibility effects. The AI has to handle cases: it has to know something about the user before it supplies the user with information, and it has to have goals and constraints in place to prevent certain outcomes deemed to be unfriendly.

The deeming of a particular scenario as unfriendly will vary with the moral framework of the host, which may itself be modified by the host's shock levels. A post-conventional, shock-level-5 Dr. Manhattan may not regard the destruction of all life on earth as of greater moral significance than the decay of a radioactive isotope in a box in an imagined universe. A Virindi Master might tell you that your epithelial tissue is most supple, and offer to buy it from you. You might narrate to your mother a stream of consciousness containing the perceptions of exploited animals around the world, and cause her to go into epileptic shock and begin talking to an invisible being. You have to respond to these kinds of situations on a case-by-case basis until some general rule can be formulated which allows you to handle all cases.

Since the suggestibility of individuals is variable, you have to have some understanding of those individuals' minds.

This brings me to a discussion of paradigms. If you observe the socioeconomic structure of a society, you can make inferences about the sociocultural orientation of its groups, and the tacit epistemological assumptions made by individuals in those groups.
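The case-by-case handling described here (consult a model of the user, apply accumulated special-case rules, fall back to whatever general rule can eventually be formulated) might be sketched like this. The `User` model, thresholds, and rules are all invented for illustration, not anything the comment specifies.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    suggestibility: float  # 0.0 (unmoved) .. 1.0 (highly suggestible)

def case_rules(user, query, answer):
    """Special cases accumulated one incident at a time, as the
    comment describes; returns None when no special case matches."""
    if "exploited animals" in query and user.suggestibility > 0.8:
        return "withheld: likely to cause distress"
    return None

def general_rule(user, answer):
    """A stand-in for the general rule the commenter hopes to
    eventually formulate to cover all cases."""
    return answer if user.suggestibility < 0.9 else "withheld"

def respond(user, query, answer):
    """Filter a candidate answer through the user model before release."""
    special = case_rules(user, query, answer)
    return special if special is not None else general_rule(user, answer)

# A highly suggestible user is shielded from the stream of consciousness;
# a less suggestible one receives the answer unfiltered.
print(respond(User("mother", 0.95), "perceptions of exploited animals", "..."))
print(respond(User("dr. manhattan", 0.0), "perceptions of exploited animals", "42"))
```

The point of the sketch is only the control flow: specific cases are checked first, and the general rule is a fallback that can be revised as new cases accumulate.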
By visiting a particular individual and eliciting information from that individual (for example by shocking him, as Dirk Gently did to Richard MacDuff by informing him that he was a suspect in a murder case), one is able to become percipiently aware of the individual's suggestibility; well enough, even, to prompt the individual to make decisions like: strip naked, jump off a bridge, and tread water until there is no more energy to tread water with, or until some condition precedent to the exit of the loop is fulfilled.

As a species, and as individuals, we have to become aware of our percipient interests, our real interests. Hypothetically, these interests can be identified in a boolean fashion; for example, there may have been a transaction or occurrence which gave rise to life on earth. Let's say that a spaceship blew up 4.2 billion years ago, and there is a ghost attempting to go back in time to cause the spaceship not to blow up. If that ghost fulfills its mission, we cease to exist. So if survival of our species is one of the criteria for friendliness, then a friendly AI would probably identify the intentions of any entity that wishes to adversely impact our survival, and set up strange attractors to divert that entity's consciousness away from any event in spacetime that could be modified to adversely affect our survival interest. I am interested to know what you think our species' real interests are, Ben.

This is an unedited comment, as I have work and play to do.

-- Sean Daniels, 2010-04-29
Eliezer,

Sorry the blog post gave the impression of "arguing against a straw man"; I've revised the wording so as to incorporate your comment and (hopefully) no longer give that impression!

Of course, the main point of my post was not to argue against anything but to suggest an alternative approach...

-- Ben Goertzel, 2010-03-15

The putative proof in Friendly AI isn't a proof of a physically good outcome when you interact with the physical universe.

You're only going to try to write proofs about things that happen inside the highly deterministic environment of a CPU, which means you're only going to write proofs about the AI's cognitive processes.

In particular, you'd try to prove something like "this AI will *try* to maximize this goal function given its beliefs, and it will provably preserve this entire property (including this clause) as it self-modifies".

So, yes, I'm afraid you were arguing against a bit of a straw man here.

-- Eliezer Yudkowsky, 2010-03-15
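The property Eliezer states is self-referential: the AI must preserve the whole property, including the preservation clause itself, through self-modification. A toy sketch can make that concrete. The following is purely illustrative and is not anything either commenter proposed; a real version would involve proof checking over the AI's code, not a runtime test, and every name below is invented.

```python
# Toy caricature of an invariant-preserving self-modification step:
# the agent accepts a proposed rewrite of itself only if the rewrite
# demonstrably keeps the acceptance check itself (the "including this
# clause" part of the property). Runtime checking stands in for proof.

def goal(state):
    """The fixed goal function: a number the agent tries to maximize."""
    return state["utility"]

def accepts_modification(agent, proposed):
    """The invariant: a successor must carry this very same check,
    so the property survives all further self-modifications."""
    return proposed.get("accepts") is accepts_modification

def make_agent(policy):
    return {"policy": policy, "accepts": accepts_modification}

def self_modify(agent, proposed):
    """Adopt the proposed rewrite only if the current agent's check passes."""
    return proposed if agent["accepts"](agent, proposed) else agent

agent = make_agent(policy=lambda states: max(states, key=goal))

# A rewrite that keeps the check, and one that quietly drops it.
good = make_agent(policy=lambda states: sorted(states, key=goal)[-1])
bad = {"policy": lambda states: states[0], "accepts": lambda a, p: True}

agent = self_modify(agent, good)  # accepted: the invariant survives
agent = self_modify(agent, bad)   # rejected: agent stays unchanged
assert agent is good and agent["accepts"] is accepts_modification
```

The sketch also shows the limit Eliezer points at: the check constrains only what happens inside this deterministic program, and says nothing about physical outcomes when the agent acts on the world.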