The Democratic National Committee was watching earlier this year as campaigns nationwide were experimenting with artificial intelligence. So the organization approached a handful of influential party campaign committees with a request: Sign onto guidelines that would commit them to use the technology in a “responsible” way.
The draft agreement, a copy of which was obtained by The Associated Press, was hardly full of revolutionary ideas. It asked campaigns to check work by AI tools, protect against biases and avoid using AI to create misleading content.
“Our goal is to use this new technology both effectively and ethically, and in a way that advances – rather than undermines – the values that we espouse in our campaigns,” the draft said.
The plan went nowhere.
Amazon will abandon it if the implementation does not meet its needs. However, the technology has already proven itself viable in the industry, and it will only improve with more time and funding. Someone else will pick up where Amazon left off.
How has it “proven itself”? All the demos shown have been at least partially faked, and most AI usage in the wild has produced at least some embarrassing examples.
At best, AI has shown itself to be a decent programming aid.
Not saying it won’t have any uses but it’s been overhyped to death…
Where I work it was given to “management,” and so far all it does is record Teams meetings and produce crappy meeting notes.
Warehouse automation has come a long way in the last year. The most notable benefits of AI are in the cost of implementation and of improvements. Previously, programmers had to analyze every aspect of every task and create custom software to meet the needs of the business. Every time the business had a new task, it needed to contact the developers to implement the changes.
Now, AI-powered hardware works alongside employees, learning the task on its own. Over time, it identifies and implements its own efficiency improvements.
This also applies to your own disappointing AI experience. Any successfully programmed AI will be more effective tomorrow than it was yesterday. The more exposure it has to relevant feedback data, the faster it will improve.
I’m not sure LLM models can really “learn” much… they can be tinkered with, for sure, and fed more/better data… but at the end of the day, LLM models are just very smart parrots; there is no real “I” in these “AI” models.
I assume government employees are talking about AI in general, not exclusively LLMs, ML, or generative AI. They don’t really know the difference between the subcategories anyway.