

Previously on "Demand for AI "Surging""


  • SussexSeagull
    replied
    Originally posted by willendure:

    I used it yesterday to write a "comprehensive" set of unit tests for some code, and it did just that. [...] To me this feels more like the 90s than the dotcom era, when CPUs were advancing very quickly, there was intense competition in technology, and large sums of money were being spent on the race.
    I can conceptually understand how it can derive Unit Tests if it has access to the code, but if you move further up the V-model towards things like System and User Acceptance Testing, it needs human input. In fact, it would require humans who actually know what they want, which can be a bit of a rarity in software development at times!



  • willendure
    replied
    Originally posted by SussexSeagull:
    In the coming up on thirty years I have been doing this, all the tools and technologies that have come along have never really sped up the development process (although things like Automated Testing will, done properly). I see nothing in AI yet that will change that.
    I believe I just had my first glimpse of something that will change that, with ChatGPT o1-preview. It's not perfect for sure, but it's a refinement of what has come before in terms of hallucination and the general quality of its answers. The big change, though, is that you can instruct it to do some task and it plans and sequences the steps, creating a real impression that it understands what you have asked it. I used it yesterday to write a "comprehensive" set of unit tests for some code, and it did just that. The way that it will continue to loop over a task until it is done is particularly useful (see the sketch after this post). I have now seen an AI agent that seems to be capable of grasping some pretty complex requests and then doing the work. The "doing the work" bit is what is crucial to making it actually useful, because earlier LLM models operating in a more single-step mode would tend to start something and then quit before completing it as they ran out of tokens.

    Writing those tests myself would have taken me most of the day, and the thought of doing it by hand was really demotivating. The prompting to get it to spit out the code was not totally straightforward, and it took me 5 or 6 rounds of prompting to get there. It is currently handicapped by not being able to access the web, so I could not get it to consume some API docs for the test toolkit I was using; I expect this limitation will be lifted once it stops being just a preview. It took me an hour or so to get what I wanted out of it, but I can also see this going more smoothly with the right knowledge base behind it.

    This is the Q* ("q-star") thing that people were speculating about around the time Sam Altman was fired and then reinstated. Big deal? But this is how far things have come since the LLM hype really kicked off around the start of 2023. Where will this be in 1 year, 5 years, 10 years?

    I have been reading "The Master and His Emissary" recently, a book about brain lateralisation. I learned from it that the area of the brain most closely involved with language processing is also the area most closely involved with manual dexterity - your right hand if you are right-handed, or left if left. The author speculates that this is because both are tools for manipulating the world. In that light, it is fascinating to see how LLMs are beginning to learn how to think through becoming good at language.

    Automated testing is definitely an area where AI is going to flourish.

    To me this feels more like the 90s than the dotcom era, when CPUs were advancing very quickly, there was intense competition in technology, and large sums of money were being spent on the race.
    Last edited by willendure; Today, 11:21.
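    A rough sketch of the difference between older single-step usage and the iterate-until-done behaviour described above. Everything here is a hypothetical placeholder, not OpenAI's actual API: call_model() stands in for any LLM client, and the planning prompt and yes/no completion check are illustrative only.

    # Hypothetical sketch; call_model() is a stand-in for a real LLM client.

    def call_model(prompt: str, history: list[str]) -> str:
        """Placeholder for a real LLM API call."""
        raise NotImplementedError  # swap in a real client here

    def single_shot(task: str) -> str:
        # Older single-step usage: one prompt, one response. If the model
        # runs out of tokens mid-answer, the task is left unfinished.
        return call_model(task, history=[])

    def agent_loop(task: str, max_steps: int = 20) -> list[str]:
        # The behaviour described above: plan, do a step, check for
        # completion, and keep looping until the task is actually done.
        history: list[str] = []
        history.append(call_model(f"Break this task into steps: {task}", history))
        for _ in range(max_steps):
            history.append(call_model(f"Do the next unfinished step of: {task}", history))
            done = call_model(f"Is '{task}' fully complete? Answer yes or no.", history)
            if done.strip().lower().startswith("yes"):
                break
        return history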



  • SussexSeagull
    replied
    Originally posted by eek:
    The problem with AI is that the use cases don't exist yet to justify the money being spent on it - because the return really isn't there

    https://www.wheresyoured.at/subprimeai/
    I think you, and the article, sum it up well. It is an emerging technology that I believe is beginning to see some real-world benefits in things like analysing scans, but the money is being invested in something that is years, if ever, away from living up to its hype.

    I had an argument with a guy on LinkedIn who assured me my career as a tester was over because of AI. Some AI ‘champions’ are borderline obsessed.

    In the coming up on thirty years I have been doing this, all the tools and technologies that have come along have never really sped up the development process (although things like Automated Testing will, done properly). I see nothing in AI yet that will change that.



  • eek
    replied
    The problem with AI is that the use cases don't exist yet to justify the money being spent on it - because the return really isn't there

    https://www.wheresyoured.at/subprimeai/



  • SussexSeagull
    replied
    Why is this beginning to remind me of the Dot Com boom 25 years ago?

    It's obviously a technology that will go on to be game-changing, but you need the world to come along with you.



  • willendure
    replied
    Originally posted by dsc:

    Sure they are, there's loads of thick peeps just following the crowd...
    Those are speculators, and largely irrelevant. The real investors in AI are more like Microsoft or Google and so on - smart people with lots of money who will invest serious capital for years or decades to shift the needle on a new technology and reap the benefits. That is where the bulk of the money is coming from - or from VC funds, banks and hedge funds, but those of course also have technical consultants who understand the tech and have ideas about the "serious applications".



  • dsc
    replied
    Originally posted by willendure:

    Those are not "investors".
    Sure they are, there's loads of thick peeps just following the crowd. Just look at the people who backed that woman who promised a drug for cancer, or whatever it was, which was a) impossible and b) a clear scam, yet she got a tulip ton of backing from a load of investors. Now of course you can dispute whether you'd call them investors in the first place, as they have no idea what they are doing, but I'd say the term still sticks.



  • sadkingbilly
    replied
    Originally posted by willendure:

    Those are not "investors".
    they're cleverly placed 'chatbots'??



  • willendure
    replied
    Originally posted by dsc:

    ...investors are not clever enough to understand that AI is used in serious applications and what the future might bring...
    Those are not "investors".



  • dsc
    replied
    Originally posted by willendure:
    Look at something like Cerebras, which sold its systems to G42 Healthcare... So while chatbots are at the party trick end, similar generative AI technology is being used for serious applications.
    I agree, but at the same time I think all the hype is driven by the party tricks and the idea that soon chatbots will do most of the work. Let's be honest, investors are not clever enough to understand that AI is used in serious applications and what the future might bring; they probably have "clever robots do chu chu" on their minds.
    Last edited by dsc; 31 August 2024, 14:02.



  • jamesbrown
    replied
    Originally posted by willendure:
    Look at something like Cerebras, which sold its systems to G42 Healthcare... So while chatbots are at the party trick end, similar generative AI technology is being used for serious applications.
    I think I said the same thing above. I agree, the underlying models span many worthwhile applications, but chatbots capture the public consciousness because "oh, look at the clever robot!"



  • jamesbrown
    replied
    Originally posted by willendure:

    Yes, but I see them as a forward evolution of the earlier models, not as two parallel branches.
    Who said anything about parallel branches?



  • willendure
    replied
    Look at something like Cerebras, which sold its systems to G42 Healthcare - they want AI to predict how drugs will behave and also to invent, or help invent, novel drugs - that's generative right there. So while chatbots are at the party trick end, similar generative AI technology is being used for serious applications. Plenty of things being developed out there are way too complex, involving hard science and engineering, for most people to appreciate.



  • willendure
    replied
    Originally posted by jamesbrown:

    It's you that made this distinction above: "earlier AIs were more about prediction and pattern recognition".
    Yes, but I see them as a forward evolution of the earlier models, not as two parallel branches.



  • jamesbrown
    replied
    Originally posted by willendure:
    I don't even really think you should think of predictive and generative as two separate branches. The new techniques invented for generative will filter back into the other. Generative has predictive aspects also - literally all LLMs do is predict the next word anyway.
    It's you that made this distinction above: "earlier AIs were more about prediction and pattern recognition".

    The underlying mathematical techniques are largely the same; deep learning techniques are used for chatbot content creation and weather forecasting alike. However, prediction and content creation are subtly different problems. Prediction is largely about estimating conditional probability distributions, i.e., "given X=x, Y=y, and Z=z, what is the set of possible outcomes of Q and their associated probabilities?", whereas generation is more about understanding all possible combinations, i.e., the joint probability, and sampling novel outcomes from joint distributions (a toy sketch follows this post).

    Anyway, my point remains, namely that chatbots are at the party trick end of the AI spectrum, not in terms of the methods used, but in the way they are being applied. Chatbots are an application of a technology. Prediction remains central to scientific applications of AI where there's typically a set of (e.g., observed) initial conditions that constrain the problem.
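    A toy sketch of that distinction, using a made-up 2x2 joint distribution (the numbers are illustrative, not from the post): prediction conditions on an observed X and reads off P(Q | X=x), while generation samples whole (x, q) outcomes from the joint.

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up joint distribution P(X, Q): X in {0, 1} (rows), Q in {0, 1} (cols).
    joint = np.array([[0.30, 0.10],
                      [0.15, 0.45]])  # entries sum to 1

    # Prediction: observe X=1, then estimate the conditional P(Q | X=1).
    x = 1
    conditional = joint[x] / joint[x].sum()
    print("P(Q | X=1) =", conditional)  # -> [0.25 0.75]

    # Generation: sample novel (x, q) outcomes from the joint distribution.
    flat = rng.choice(joint.size, size=5, p=joint.ravel())
    samples = [divmod(i, joint.shape[1]) for i in flat]
    print("sampled (x, q) pairs:", samples)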

