Capitol Fax.com - Your Illinois News Radar



Question of the day


* The setup

But when it comes to political campaigns and politics, the misuse of artificial intelligence could threaten our very democracy.

“Deepfakes” use AI to create images, sound clips and videos that appear very real but are simply manufactured. They aren’t the Photoshop photos that swap out one person’s face for another in a photo, but technology that can take anyone’s likeness and voice and create virtually any video the creator wants.

A bipartisan group of senators has introduced the Protect Elections from Deceptive AI Act, which would ban the distribution of “materially deceptive” AI-generated political ads relating to federal candidates or certain issues that seek to influence a federal election or fundraise.

It’s a good start but doesn’t go far enough. AI has become easy to use and available to anyone, including state and local politicians and their staff.

Congress should require any political ad or politically related content that uses AI to be clearly labeled as being AI generated, whether they are deceptive or not.

* The Question: Should the Illinois legislature vote to require any political ad that uses AI to be clearly labeled as being AI generated? Take the poll and then explain your answer in comments, please.



posted by Rich Miller
Tuesday, Sep 26, 23 @ 12:23 pm

Comments

  1. I voted yes. The other day I heard the late Christopher Hitchens speaking, via AI, about current politics. I don’t think that should be in political ads.

    Comment by Steve Tuesday, Sep 26, 23 @ 12:28 pm

  2. Voted yes, though I’m not sure how you would completely define ‘AI generated,’ and would this rule extend to stuff that was obviously faked (like JB’s head on top of a bear or something)?

    Would you describe it as AI or just computer-generated? It seems like a good idea, but I think the implementation is going to be a challenge.

    Comment by OneMan Tuesday, Sep 26, 23 @ 12:28 pm

  3. This is an obvious first step. We need way more regulation around AI but any step in the right direction is a good thing.

    Comment by Mr. Middleground Tuesday, Sep 26, 23 @ 12:31 pm

  4. The temptation to use it in dark-funded oppo is high. The bad guys will put it out anyway, as they do now with the cruder stuff, not caring if it gets swatted down, because the damage has already been done by seeding the lies into the public consciousness.

    Comment by Give Us Barabbas Tuesday, Sep 26, 23 @ 12:32 pm

  5. Absolutely. This is spiritually in line with our biometric privacy laws as well as good transparency practices.

    Voters need to know if what they are seeing/hearing is legitimate or not, and those in the videos need to be honestly represented.

    Comment by John Morrison Tuesday, Sep 26, 23 @ 12:33 pm

  6. Yep.

    Forget everything to anything on this.

    Ads need to identify “actor portrayal” in commercials… at least to seek honesty to the ad.

    So…

    Voted Yes. Easy

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 12:34 pm

  7. Absolutely. And if you are hesitant to disclose the use of AI, then to me that’s a clear sign your goal was to deceive.

    Comment by Montrose Tuesday, Sep 26, 23 @ 12:34 pm

  8. I voted yes, so people can know it’s AI and let voters decide if they want to believe the ad.

    Comment by austinman Tuesday, Sep 26, 23 @ 12:34 pm

  9. No.

    There’s just no way to define it in a way that would have a meaningful impact. Is using autocomplete when typing a caption considered ‘using AI’ under this proposal? Because it is using AI. How would labeling it as ‘AI generated’ provide any value?

    If someone can be deceived with AI, they can be deceived without it just as easily. AI isn’t some magical potion. The people consuming the ads are going to be exactly the same people no matter how you label the political ads. Read into that what you will.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 12:37 pm

  10. Yes, but we don’t enforce much now. Or maybe politics gets more attention than crime.

    Comment by Hank Saier Tuesday, Sep 26, 23 @ 12:37 pm

  11. Yes, more transparency in political speech is good. Of course, the bigger issue remains transparency in donations, most especially who donates to the dark money groups most likely to put up misleading ads.

    To the post, I would expect dark money groups to pop up, run an AI ad that violates whatever rules are put in place, and then quickly disappear so there is no one to hold accountable.

    Comment by Pot calling kettle Tuesday, Sep 26, 23 @ 12:37 pm

  12. ===There’s just no way to define it===

    Is it altered or edited using AI?

    Pretty simple, yes or no.

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 12:38 pm

  13. Voted yes, but with reservations. AI content needs to be properly labeled, but I’m also concerned about who will be making the decisions to potentially flag or censor items.

    Quis custodiet ipsos custodes?

    Comment by RNUG Tuesday, Sep 26, 23 @ 12:39 pm

  14. I’ll leave it at this;

    Defending purposeful deception with AI for the good of a factual argument is an odd “truth” flex.

    Just sayin’

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 12:41 pm

  15. “Defending purposeful deception”

    This perfectly proves my point.

    You’ve already equated using AI with being purposely deceptive before the race has even started. That’s your own baggage you are bringing into the argument as a de facto standard that cannot be questioned.

    I’ll ask it again - how is using autocomplete being purposely deceptive? That’s AI.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 12:46 pm

  16. ===You’ve already equated using AI with being purposely deceptive===

    Oh.

    You think AI editing is going to be for positive ads?

    That’s way too naive.

    Disclose. It’s not up to me or anyone to make AI a positive thing.

    Likely, people perceive it negatively because, why… positive usage?

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 12:48 pm

  17. ===autocomplete===

    Show me that ad, let me hear that radio ad.

    You do know what context we are discussing, no?

    It’s like an actor portraying a “doctor” telling me putting peanut butter on a blister will cure my bronchitis… “actor portrayal”… because no doctor will say it, but dress up an actor…

    “autocomplete”?

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 12:51 pm

  18. ===Would you describe it as AI or just computer-generated?===

    That’s a good question, and the label might need to say “altered footage” or something like that instead of just “AI.”

    Comment by ArchPundit Tuesday, Sep 26, 23 @ 12:51 pm

  19. “You think AI editing is going to be for positive ads?”

    Why not. It already is.

    AI is used to upscale old or damaged video recordings to make them easier to see or to provide a higher resolution. No deception involved.

    Again, you are bringing the automatic assumption of it always being negative as a justification.

    AI is a tool. Just like words are.

    Maybe we should label all ads that use words? They can be used to deceive as well. We don’t do that, because words are a tool; deception is only one way to use that tool.

    “Parental Advisory” labels worked very well, just in the opposite way from what the advocates intended. The label became a badge to seek out, not to avoid.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 12:56 pm

  20. ===AI is used to upscale old or damaged video recording to make it easier to see or provide a higher resolution. No deception involved.===

    Just label the deception. Again, easy

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 12:56 pm

  21. If it’s such a positive, why not just embrace the label?

    Maybe that’s the question.

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 12:58 pm

  22. “AI is a tool.”

    It is a tool, but you can’t say that as though all tools are the same. They just aren’t. How we use/deal with/regulate AI will evolve over time as its prevalence grows and people become more accustomed to it. Right now, a label that lets folks know it’s the tool that’s being used (folks are already aware words are being used) seems like a reasonable, helpful step.

    Comment by Montrose Tuesday, Sep 26, 23 @ 12:59 pm

  23. “You do know what context we are discussing, no?”

    Yes. The context of the question is;

    “Should the Illinois legislature vote to require any political ad that uses AI to be clearly labeled as being AI generated?”

    Nothing in there defines it as only being required for deceptive ads.

    Again. You are bringing your assumption of ‘always negative’ as a default. It’s not a default, and you seem to be getting angry about having that assumption questioned and not acknowledging that a tool can be either good or bad. Labeling for the use of a tool alone accomplishes nothing - **UNLESS** you already come to the table with the assumption that anything AI is automatically bad. That’s a false assumption not supported by facts.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 1:01 pm

  24. I just think AI shouldn’t be a thing. We’ve been warned for years about the potential dangers of AI, from Asimov to the Terminator. I say the same thing about cloning. Crichton had it correct: some people are so busy thinking about whether something can be done that they don’t ask whether it should be.

    Comment by Just Another Anon Tuesday, Sep 26, 23 @ 1:01 pm

  25. I would require it on any broadcast ad for almost anything, especially medical commercials. As to the political ones, it should require a voice-over at the beginning stating this is AI generated, and again at the end.

    Comment by DuPage Saint Tuesday, Sep 26, 23 @ 1:02 pm

  26. ===Again, you are bringing the automatic assumption of it always being negative as a justification.===

    If it’s a positive, why the pushback on the label?

    ===“Parental Advisory” labels worked very well, just in the opposite way as the advocates intended it to. It became a badge to seek out, not to avoid.===

    Then those so bent on using AI should hope for the label to generate buzz, no?

    This last…

    ===AI is a tool. Just like words are.

    Maybe we should label all ads using words? They can be used to deceive as well. We don’t do that, because words are a tool, deception is only one way to use that tool.===

    Deception is the use that is the problem. If you don’t see deception as a problem with this tool… it’s why your lack of fear of words could be troubling and why any deceptive ad is bad… and also why this idea of “positive deception” is like… “alternative facts.”

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 1:02 pm

  27. Yes. Incorporating the reasons used by so many fine commenters here.

    Also, there have been some great points on issues that will need to be addressed.

    (I, Robot was on over the weekend. AI consequences from the minds of the entertainment industry.)

    Comment by Norseman Tuesday, Sep 26, 23 @ 1:04 pm

  28. “as though all tools are the same.”

    All tools are the same.

    Nuclear science is a tool. It can be used for electricity or bombs.

    A knife is a tool. It can be used to cut a birthday cake, or stab an estranged relative in the chest.

    A car is a tool. It can be used to drive to a hospital for the birth of a child, or traffic a minor across state lines for illegal purposes.

    Words are a tool. They can be used to convey information useful to all parties involved, or they can be used to convince someone you are a Nigerian prince so they send you money.

    If we labelled a cake:
    “This cake was created with a knife”

    Such a statement would be meaningless, unless you assumed the word knife automatically meant something negative.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 1:05 pm

  29. Voted yes. Unfortunately, oftentimes the average person believes what they see. And if they are not given any heads-up that this is AI, what they see can contain even more lies or half-truths than you normally see in campaign ads.

    Comment by Dupage Dem Tuesday, Sep 26, 23 @ 1:06 pm

  30. ===Nothing in there defines it as only being required for deceptive ads.===

    Again… Defending purposeful deception with AI for the good of a factual argument is an odd “truth” flex.

    Label it. You use it. Label it. Altered? Label it.

    ===Labeling for the use of a tool alone accomplishes nothing - **UNLESS** you already come to the table with the assumption that anything AI is automatically bad. That’s a false assumption no supported by facts.===

    Again, why the pushback on any label at all? You are assuming it will be taken as negative, something that is not supported by facts, yet… that’s your argument.

    ===getting angry===

    Don’t gaslight me about my alleged “anger.” I’m enjoying defending my thoughts, but that technique you used doesn’t need a label either.

    :)

    All good, I’m sure you will never ask where I was again, and I can understand, bud.

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 1:07 pm

  31. Voted yes, but please, no knee-jerk actions to restrict; thought-out discussions instead. I hope this can be done, but I don’t have high hopes that both parties can have a sensible discussion. We will end up with many different types of laws between states, and that is not good.

    Comment by snowman61 Tuesday, Sep 26, 23 @ 1:08 pm

  32. “Don’t gaslight me”

    Perhaps anger was the wrong word to use; you are correct. Patronizing would probably be more accurate - in asking if I ‘understood the context, no.’

    I understand AI to be a huge universe which involves everything from spell check at the low end, with autocomplete a small step above that. A level above that is ‘color correction’ in a photo taken in full sunlight or maybe moonlight instead. Right around there is lossy compression of a digital image in JPEG format, where higher compression is applied to some areas of the image than others based on the content within the image being compressed.

    Yes, I understand the context. Quite fully.

    Labeling any of those things is meaningless - unless your intent is to taint something before anyone even sees it with an assumed negativity. That seems… deceptive.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 1:15 pm

  33. ===Labeling any of those things is meaningless - unless your intent is to taint something before anyone even sees it with an assumed negativity. That seems… deceptive.===

    So it’s not me; it’s you who sees the labeling as negative.

    You wrote this…

    ===Again, you are bringing the automatic assumption of it always being negative as a justification.===

    But also, again, this…

    ===Labeling any of those things is meaningless - unless your intent is to taint something before anyone even sees it with an assumed negativity. That seems… deceptive.===

    It’s your bias toward seeing the label as negative, not my thought that it would be used negatively, that is the concern.

    Words matter.

    So, simple fix. Altered in any AI way, label it.

    The ad on its own will be judged, altered as it is… unless you think AI isn’t altering it at all, because that isn’t the case.

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 1:24 pm

  34. I think that anything AI should be labeled if broadcast. Fraud via AI in advertising, whether political or in the sale of canned goods, needs to be prosecuted.

    If someone didn’t say or do something, and altered footage shows them saying or doing it, then that’s fraud and needs prosecuting harshly to shove it out of our political and all other non-entertainment spaces.

    Comment by cermak_rd Tuesday, Sep 26, 23 @ 1:26 pm

  35. I’m not sure even a label would be adequate. Political ads currently have required disclosures, yet they’re rendered in tiny fonts that are illegible to many readers/viewers.

    Comment by Bull Durham Tuesday, Sep 26, 23 @ 1:36 pm

  36. Voted yes but I don’t think these ads should be allowed at all.

    Comment by Captain Obvious Tuesday, Sep 26, 23 @ 1:42 pm

  37. Yes. And heck, all AI depictions of any real person should be mandated to have clearly labeled fake warnings on screen.

    Comment by TJ Tuesday, Sep 26, 23 @ 1:43 pm

  38. ===Nuclear science===

    I’ve yet to find an instance where anything towards “nuclear” is NOT labeled.

    If you have one…

    To the question, and to intellectual property,

    Using altering AI within even a photograph, as in the example used, to alter or enhance - that’s taking the art of that photograph and changing the context of what the artist wanted conveyed, or taking the moment in time and changing what the artist wanted shown.

    There’s a reason… actors, writers, artists, even scientists… they don’t want AI in their workplaces or as part of their processes unless it’s known as altered, and welcomed toward the art… with consent.

    If you take and alter, even in a “good” framing, any artist’s work (voice, image, photo, film, digital), it should be a given that those consuming it know they are being given AI. Intellectual property isn’t an arbitrary public “base point” unless the creator has a say (or should have that say).

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 1:50 pm

  39. Voted YES. AI takes CGI to a new hemisphere.

    Comment by JS Mill Tuesday, Sep 26, 23 @ 1:53 pm

  40. =how is using autocomplete being purposely deceptive=

    How do I get these fancy political ads with autocomplete?

    Comment by Joe Bidenopolous Tuesday, Sep 26, 23 @ 1:55 pm

  41. Voted no. Why bother? Still won’t be able to tell if they’re telling the truth about that or not. Just like it is now.

    Comment by Papa2008 Tuesday, Sep 26, 23 @ 1:56 pm

  42. “where anything towards “nuclear” is NOT labeled. If you have one…”

    Bananas.

    Or does your grocery store label them all as being radioactive from potassium-40?

    We already approach things this way, and there’s the specific example you asked for.

    For the same reason, labeling anything touched by AI, no matter to what degree, is equally meaningless.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 2:01 pm

  43. ===Or does your grocery store label them all as being radioactive from potassium-40?===

    Has the FDA demanded such a thing?

    Which foods *aren’t* labeled anymore?

    ===For the same reason, labeling anything as touched by AI no matter what degree it is done is equally meaningless.===

    It’s not meaningless, you say so yourself… here…

    ======Labeling any of those things is meaningless - unless your intent is to taint something before anyone even sees it with an assumed negativity.===

    You’re protecting the idea that AI isn’t deceptive.

    Friend, the “A” stands for “Artificial”

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 2:06 pm

  44. Is AI generating TheInvisibleMan’s posts?

    Comment by mrp Tuesday, Sep 26, 23 @ 2:07 pm

  45. “It’s not meaningless”

    A label would convey no useful information. What it would do is cause some people who automatically assume anything nuclear at all equals bad to stop eating bananas.

    The only meaning such a label would have, would be to allow people with an incorrect understanding of a topic to take an action not supported by facts.

    So in a sense, you are correct. The meaning of such a label would be in the hopes of effecting an action among people lacking a full understanding of a topic - which, correct me if I’m wrong, is exactly the problem you think you would be stopping with such an idea.

    “Parental Advisory”

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 2:12 pm

  46. ===to stop eating bananas.===

    That’s up to the FDA. Is the FDA regulating AI?

    ===The only meaning such a label would have, would be to allow people with an incorrect understanding of a topic to take an action not supported by facts.===

    What’s incorrect? Artificial Intelligence as used. That’s a fact.

    ===The meaning of such a label would be in the hopes of effecting an action among people lacking a full understanding of a topic - which correct me if I’m wrong is exactly the problem you think you would be stopping with such an idea.===

    You want to deceive people that AI wasn’t used?

    That’s what you are saying.

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 2:15 pm

  47. I can see a time in the not-so-distant future when all political ads will be AI generated to some extent and it will be assumed. Perhaps a certificate of non-AI content would be more valuable.

    Comment by Jaguar Tuesday, Sep 26, 23 @ 2:15 pm

  48. “Is AI generating TheInvisibleMan’s posts?”

    Perhaps I’m just an AI programmed with an LLM heavily reliant on Alan Watts, using OW’s posts as the RLHF feedback.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 2:16 pm

  49. “Perhaps a certificate of non AI content would be more valuable.”

    I would love to see this.

    It would show the impossibility of getting such a certification. If you can get one, go for it.

    Live speeches are about the only thing which would qualify. But where would you post it? In a text image used with AI-assisted graphical processing to print and size it onto paper? Oops. Just lost that certificate.

    Comment by TheInvisibleMan Tuesday, Sep 26, 23 @ 2:19 pm

  50. - TheInvisibleMan -

    People are told they are getting artificial colors and flavors in food… it’s artificial intelligence being fed to consumers…

    If you wanna turn it on its artificial ear.

    ===in the hopes of effecting an action among people lacking a full understanding of a topic===

    That’s one heck of a thought… lol… tell folks that AI is being used… because people perceive it as bad… but don’t tell folks… because keeping them uninformed is being honest to the content?

    You’re selling AI as Alternative Facts.

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 2:20 pm

  51. ===posts as the RLHF feedback.===

    I guarantee I’d be the one human that could make AI dumber.

    For the sake of mankind, don’t use me for rhetorical feedback.

    :)

    Comment by Oswego Willy Tuesday, Sep 26, 23 @ 2:23 pm

  52. I voted Yes

    However, this issue is larger than politics. As AI now enters the language of the day, the reality is that most Americans are addicted to the internet in some form or another (e.g., cable TV, Network subscriptions, Google and Wiki, online sources of information including news, online access to knowledge bases, etc.).

    The average American is exposed to external sources of knowledge all the time. And in that context, it would behoove the government to regulate artificially created information that is presented to citizens, in all forms by which it is presented.

    Starting with truthful labeling is an essential first step. But we also need new legislation creating mandates for governmental bodies to oversee and regulate this burgeoning source of potentially harmful information at the federal and state levels. Better to be proactive and reactive, than simply reactive.

    Comment by H-W Tuesday, Sep 26, 23 @ 2:30 pm

  53. This is why it’s much more dangerous than the old Photoshop tricks like darkening skin color or retouching a still image (see video). This video is old; the tech has become frighteningly more realistic since it was made… You can see the potential for a political rival to attack from the shadows and “leak” a fake with false information, run comments through the news cycle a few times to generate buzz, then stand back and watch the conspiracy nuts take the ensuing chaos to higher levels, making for a lot of views. And no matter how many times it gets debunked, some neuro-atypical, suggestible voters will believe it and act on it - perhaps in deadly ways. https://youtu.be/gLoI9hAX9dw

    Comment by Give Us Barabbas Tuesday, Sep 26, 23 @ 2:39 pm

  54. I voted yes, but to me this misses the real problem, which persists: people can lie and then hide behind freedom of speech. To me, a better answer is that anytime a picture is altered in any way, or the narrative surrounding it is made up, it needs to be labeled as fictional, and if it isn’t, the punishments need to be real (starting at a minimum of $100,000 and one month in jail would be my preference).

    Comment by Lurker Tuesday, Sep 26, 23 @ 2:41 pm

  55. I voted yes because: “AI is a tool.”
    Just like a match and a flamethrower are tools. Both produce fire, but……

    Comment by don the legend Tuesday, Sep 26, 23 @ 2:52 pm

  56. First Amendment jurisprudence needs to be revisited. Lying is protected speech, based on precedent from a time long before you could generate photorealistic animations of famous people saying things they didn’t say. I don’t think those are deserving of free speech protections any more than commercial fraud is.

    Comment by Homebody Tuesday, Sep 26, 23 @ 3:24 pm

  57. Yes. Out the fake.

    Comment by Amalia Tuesday, Sep 26, 23 @ 3:43 pm

