Previously on "More intelligent than some on this forum ..."

  • Zippy
    replied
    Originally posted by Troll
    OK – but your suggestion is that machines will have the capability to 'evolve over time'. What would be the basis of this evolution? With creatures, natural selection works through biological reproduction: pleasure has evolved as a mechanism to prompt animals to reproduce, and those that reach maturity in their environment and reproduce win the race and 'evolve'.

    Robots cannot use the same mechanism; even with consciousness they would just manufacture other robots. What would be the stimulus for change (evolution)?

    You could argue that the only stimulus would be human R&D effort, as any robot that is 'self-aware' would consider itself perfect and thus lose any impetus to 'improve'. Ergo, once mechanical consciousness has evolved, humans are reduced to the role of drones.

    This is a really interesting thread – shame we don't get more of these.

    Evolution doesn't really have a 'basis'. Random mutations just try out new characteristics and see if they work. You also have to remember that species that 'fail' will probably get a few hundred (thousand!) generations before they are wiped out - or change. I really don't see why evolution should not also apply to the 'mechanical' world.
    There also seems to be an implicit assumption that robots should be human-like(??). My cat doesn't appear to feel that way (he just gets on with being a successful cat) and he's reasonably intelligent - so why should a robot be human-like?

    We can look at the research. You have the 'building a brain' mob (Frankensteins, in my opinion), but they are trying.
    You have the evolutionists: genetic algorithms, neural networks, etc. (see the sketch at the end of this post).
    Out of these strands of research we have developed new 'species', like the machine that is exploring Mars, for instance: the Mars rovers needed both a priori knowledge and the ability to adapt in order to succeed (and they are not at all human).

    The last paragraph of Troll's post seems to assume that once a species has achieved what it considers to be the top of the evolutionary tree, it will cease to evolve. This cannot be true: we have no choice in the matter, and we simply cannot cease to evolve, because evolution is not (yet!) a conscious act. We can, to a certain extent, manipulate genes, but we don't yet know how that will affect the long-term viability of our species.

    If robots are intelligent then surely they have to be conscious, so won't they be subject to the same rules as the rest of us? (It has to be said that I define intelligence as more than 'I know loads of stuff'.) Humans think beyond their own species, so, as the next step on the evolutionary ladder, wouldn't intelligent, 'superior' artificial life forms think in a similar way? And if these creatures don't evolve but are built, why would any human create one that thought itself 'perfect'? And, if they were created, they would not be easily adaptable, so their actions could be predicted by us inferior humans.

    I probably haven't put the above arguments in the best way (the, ahem, festive spirit still has a hold), but somebody please come back with some counter-arguments or other ideas, as it is so interesting. I feel the AI world is in a very interesting place: the Creationists (build intelligence) vs the Darwinists (sh1t happens). It's just a shame it seems to have got stuck there for a while.
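
    To make the genetic-algorithm idea concrete, here is a minimal sketch in Python (purely illustrative; the target, population size and mutation rate are all made up): random mutation proposes new 'characteristics', selection keeps whatever fits the 'environment', and nothing in it needs to want to improve for the population to change over the generations.

    # Minimal genetic-algorithm sketch: evolve bit-strings towards a fixed
    # "environment". Mutation is random and undirected; selection keeps the
    # fitter half of each generation.
    import random

    TARGET = [1] * 20                      # stand-in for "fits the environment"

    def fitness(genome):
        # How many bits match the environment's demands.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        # Random changes: no individual "chooses" to improve.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

    for generation in range(100):
        # Selection: the half that best fits the environment "reproduces".
        population.sort(key=fitness, reverse=True)
        survivors = population[:15]
        # Reproduction with mutation fills the next generation.
        population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

    print("best fitness after 100 generations:", fitness(max(population, key=fitness)))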

  • IR35 Avoider
    replied
    A bit off-topic, but I recently bought a book on "happiness" while passing through an airport, mainly a philosophical/educational work rather than a self-help tome. It made the point that, of the six or seven basic human emotions, all except joy were negative and required a specific response (e.g. fear: run away from the tiger), whereas for joy the correct response was "do nothing", i.e. allow the present circumstances to continue. That negative emotions require tailored reactions explains the opening line of Tolstoy's Anna Karenina: "Happy families are all alike; every unhappy family is unhappy in its own way."
    Last edited by IR35 Avoider; 29 December 2006, 13:05.

  • Troll
    replied
    Originally posted by sasguru
    You don't code a system to feel pleasure. The idea is that you code a system with simple rules that allow it to evolve over time to become conscious, and perhaps it then evolves to feel pain and pleasure, whatever those concepts are (and we all know they are relative).
    OK – but your suggestion is that machines will have the capability to 'evolve over time'. What would be the basis of this evolution? With creatures, natural selection works through biological reproduction: pleasure has evolved as a mechanism to prompt animals to reproduce, and those that reach maturity in their environment and reproduce win the race and 'evolve'.

    Robots cannot use the same mechanism; even with consciousness they would just manufacture other robots. What would be the stimulus for change (evolution)?

    You could argue that the only stimulus would be human R&D effort, as any robot that is 'self-aware' would consider itself perfect and thus lose any impetus to 'improve'. Ergo, once mechanical consciousness has evolved, humans are reduced to the role of drones.

  • sasguru
    replied
    Originally posted by threaded
    Many people can blot out pain just by willpower etc. etc.

    Most cannot, but they are often throwbacks anyhow.
    Let me guess. You learnt to block out pain when you served in the special forces?

    I would wager that I could break you.

  • threaded
    replied
    Originally posted by sasguru
    Isn't that what has happened to humans? Much of what was useful in our evolutionary past is useless now, yet even if we are conscious of this intellectually, we can't change the way we feel or behave.
    Many people can blot out pain just by willpower etc. etc.

    Most cannot, but they are often throwbacks anyhow.

  • sasguru
    replied
    Originally posted by threaded
    If, as you posit, a machine intelligence becomes "self"-aware, will it not then discover that, because it and the world are imperfect and constantly changing, it is inevitable that it will suffer if it continues to use such concepts?
    Isn't that what has happened to humans? Much of what was useful in our evolutionary past is useless now, yet even if we are conscious of this intellectually, we can't change the way we feel or behave.

  • threaded
    replied
    If, as you posit, a machine intelligence becomes "self"-aware, will it not then discover that, because it and the world are imperfect and constantly changing, it is inevitable that it will suffer if it continues to use such concepts?

  • sasguru
    replied
    You don't code a system to feel pleasure. The idea is that you code a system with simple rules that allow it to evolve over time to become conscious, and perhaps it then evolves to feel pain and pleasure, whatever those concepts are (and we all know they are relative).

  • Troll
    replied
    If you can code a machine to detect/sense/experience pain, could you similarly code for pleasure?

    If you accept that damage/pain sensing is of benefit to robots as a self-preservation mechanism, what would be the corresponding reason for coding for pleasure?

    And before anyone goes down the Cherry 2000 route – in this context pleasure is as received… and yes, I’m very, very bored.
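
    One illustrative way to read that question, sketched below in Python (entirely hypothetical, not anyone's actual design): if 'pain' is coded as a negative feedback signal from damage sensors, then 'pleasure' could be coded as the symmetric positive signal for states the robot should stay in, which is the self-preservation argument run in the other direction.

    # Illustrative only: "pain" and "pleasure" as opposite feedback signals in a
    # hypothetical robot controller. Pain penalises sensed damage; pleasure
    # rewards states the designer wants maintained (here, a charged battery).

    def feedback(damage_level: float, battery_level: float) -> float:
        """Signed feedback for a state: negative feels like 'pain', positive like 'pleasure'."""
        pain = -10.0 * damage_level        # strong penalty for sensed damage
        pleasure = 2.0 * battery_level     # mild reward for a well-charged state
        return pain + pleasure

    # The controller's only rule: prefer the action whose predicted state feels better.
    def choose(predicted_states: dict) -> str:
        return max(predicted_states, key=lambda a: feedback(*predicted_states[a]))

    print(choose({"keep_working": (0.3, 0.4), "recharge": (0.0, 0.9)}))  # -> recharge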

  • Mailman
    replied
    My gawd, I have created a monster!

    And some people wonder where the PC brigade comes from!

    Mailman

  • sasguru
    replied
    Originally posted by VectraMan
    I didn't expect a kind of Spanish Inquisition.
    He he. Sorry, I do like my precision.

    Consciousness is an interesting topic though ...

  • VectraMan
    replied
    Originally posted by sasguru
    You're missing the point. We were talking about giving robots rights. In so far as they "feel" pain, that would be correct. But should the car above be given rights because it has "sensed pain"?
    I was responding to Ardesco saying "why would you code something to feel pain", and to be honest I was just arguing for the sake of it as I was bored. I didn't expect a kind of Spanish Inquisition.

  • sasguru
    replied
    Originally posted by VectraMan
    If it was pleasurable we wouldn't move away/run/stop doing it/see a doctor etc. When modern cars detect a fault they reduce power and go into "limp home mode", which isn't that far removed from how our bodies react to injury. To what extent you can say the car feels pain is a matter of philosophy, but if you pick up something hot you don't go through an intelligent conscious process to reach a decision about what to do; you drop it instinctively. In that way we're no different from a machine that's been programmed to drop something in response to a heat sensor to protect itself from injury.
    You're missing the point. We were talking about giving robots rights. In so far as they "feel" pain, that would be correct. But should the car above be given rights because it has "sensed pain"?

  • VectraMan
    replied
    Originally posted by sasguru
    Indeed. But why do we associate feelings of unpleasantness with it?
    If it was pleasurable we wouldn't move away/run/stop doing it/see a doctor etc. When modern cars detect a fault they reduce power and go into "limp home mode", which isn't that far removed from how our bodies react to injury. To what extent you can say the car feels pain is a matter of philosophy, but if you pick up something hot you don't go through an intelligent conscious process to reach a decision about what to do; you drop it instinctively. In that way we're no different from a machine that's been programmed to drop something in response to a heat sensor to protect itself from injury.
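
    A tiny sketch of that kind of programmed reflex, in Python (the sensor name, threshold and gripper are all hypothetical): the controller releases whatever it is holding the moment a heat reading crosses a damage threshold, with no deliberation in between.

    # Hypothetical protective reflex: drop the held object as soon as the
    # gripper's temperature sensor crosses a damage threshold. No reasoning,
    # no model of "pain" - just a hard-wired sensor-to-action rule.

    DAMAGE_THRESHOLD_C = 60.0   # assumed limit before the gripper is harmed

    def reflex_step(gripper_temp_c: float, holding: bool) -> str:
        """Return the action for one control-loop tick."""
        if holding and gripper_temp_c >= DAMAGE_THRESHOLD_C:
            return "release"    # instinctive drop, like letting go of a hot pan
        return "hold"           # otherwise carry on as normal

    # Example ticks with made-up sensor readings
    for temp in (25.0, 48.5, 71.2):
        print(temp, "->", reflex_step(temp, holding=True))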

  • threaded
    replied
    Originally posted by sasguru
    Indeed. But why do we associate feelings of unpleasantness with it?
    Because you have not achieved enlightenment.
