• 1 Post
  • 36 Comments
Joined 7 months ago
Cake day: September 13th, 2024


  • So they are moving away from general models and specializing them for particular tasks, as certain kinds of AI agents.

    It will probably route queries to agents defined within a narrow domain, and those agents will probably be much less prone to error.

    I think it's a good next step. Expecting general intelligence to arise out of LLMs with ever-larger models and training sets is a heavily criticized idea on Lemmy, and this move supports the apparent limitations of that approach.

    If you think about it, assigning special “thinking” steps to AI models makes less sense for a general model, and much more sense within a well-defined scope.

    We will probably curate these scopes very thoroughly over time, and people will start trusting the accuracy of the answers as the design approaches become more tailored.

    When we have many effective, tailored agents for specialized tasks, we may be able to chain them together into compound agents that can reliably carry out the many tasks we expected from AI in the first place.
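    To make that concrete, here's a minimal sketch of what routing and chaining could look like. Everything here is hypothetical (the agent names, the keyword router, the answer functions); a real system would put a narrowly scoped model behind each agent:

        # Hypothetical compound agent: route a query to a narrow specialist,
        # or chain specialists so each refines the previous one's output.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Agent:
            name: str
            keywords: set[str]            # crude stand-in for a real domain classifier
            answer: Callable[[str], str]  # stand-in for a scoped model call

        def route(query: str, agents: list[Agent]) -> Agent:
            """Pick the specialist whose domain best matches the query."""
            return max(agents, key=lambda a: sum(k in query.lower() for k in a.keywords))

        def compound(query: str, pipeline: list[Agent]) -> str:
            """Chain specialists: each agent refines the previous agent's output."""
            result = query
            for agent in pipeline:
                result = agent.answer(result)
            return result

        # Toy specialists; in practice each would wrap its own tailored model.
        sql_agent  = Agent("sql",  {"table", "join", "query"}, lambda q: f"SELECT ...  -- for: {q}")
        docs_agent = Agent("docs", {"explain", "schema"},      lambda q: f"Plain-English explanation of: {q}")

        print(route("how do I join two tables?", [sql_agent, docs_agent]).name)  # -> sql
        print(compound("explain this schema", [docs_agent, sql_agent]))

    The interesting design question is the router: a cheap classifier picking the right narrow agent is what would make the lower per-agent error rate pay off.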


  • No, they will judge you as being above the law (original commenter), and they will be wrong, which doesn't matter, as long as we feel continuity with our synthesized narrative.

    Because truth doesn't matter. Our narrative just needs to be as loud as the opposition's, and then we can confuse people just like those in power do… and then the impressionable people trying to understand what's going on, or what's morally right, will believe one side or the other, and truth will never need to be discussed, because it's not as catchy anyway.

    Then people won't need to be trusted to form their own worldview based on facts; they can neatly choose between a few curated viewpoints, and holding views from multiple viewpoints will isolate them from relevance when they're shunned for not memeing their ideology like everyone else.




  • There's no particular fuck-up mentioned in this article.

    The company that conducted the study this article speculates on said these tools are rapidly getting better, and that it isn't suggesting anyone ban AI development assistants.

    Also, as quoted in the article, using these coding assistants is a process in and of itself. If you aren't using AI carefully and iteratively, you won't get good results with current models; how we interact with a model is as important as the model's capability. The article quotes that a coder who uses these tools well can be 2x or 3x faster. Not sure about that personally… it seems optimistic, depending on what's being developed. (There's a rough sketch of what an iterative loop could look like at the end of this comment.)

    It seems like a good discussion with no obvious conclusion, given the infancy of the tech. Yet the article's headline and accompanying image suggest it's wreaking havoc.

    Reducing the complexity of this topic serves nobody. We should have the patience and impartiality to watch it develop and form opinions independently of commenter and headline sentiment. Groupthink has been particularly dumb on this topic from what I've seen.
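
    For what "carefully and iteratively" could mean in practice, here's a rough sketch; model_complete() is a placeholder, not a real API, and the test command and file name are just examples:

        # Hypothetical loop: generate, run the tests, feed the concrete
        # failure back to the model, repeat. Never trust the first answer.
        import subprocess
        from pathlib import Path

        def model_complete(prompt: str) -> str:
            raise NotImplementedError("stand-in for whatever completion API you use")

        def iterate(task: str, max_rounds: int = 3) -> str:
            prompt = f"Write Python code for: {task}"
            code = model_complete(prompt)
            for _ in range(max_rounds):
                Path("generated.py").write_text(code)
                # Check the output against the project's own tests.
                result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
                if result.returncode == 0:
                    return code  # tests pass: accept it
                # Feed the actual failure back instead of re-prompting blind.
                prompt = (f"{prompt}\n\nYour last attempt:\n{code}\n"
                          f"failed with:\n{result.stdout}\nFix it.")
                code = model_complete(prompt)
            return code  # best effort after max_rounds

    The point isn't this exact loop; it's that the checking step is part of the process, which is what the article seems to be getting at.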