さんだーさんだ!(ブログ版)

I became a junior and senior high school English teacher in the 2015 school year. I now work at a school, opened in 2020, that combines kindergarten, elementary, and junior high students.

Inside the AI factory

↑ I transcribed this podcast with the transcription app I wrote about the other day: ↓
thunder0512.hatenablog.com

It's long, so here are the main caveats up front 🙏 (I think using the GPT-4 API improved things considerably, though...)

  • The [mm:ss] timestamps appear in some places but not others.
  • Entries like "-Japan(日本)" or "-Barak Obama(バラク・オバマ)" may show up; these are examples given in the vocabulary-list prompt that can leak into the output.
  • In places the transcript may have mistaken my prompts to ChatGPT (about proofreading, etc.) for the podcast itself.
  • There are no doubt many other gaps, such as missing Japanese translations; please accept these as limitations of ChatGPT.
  • I cannot take any responsibility for mistranslations or imperfect transcription, so please verify the content yourself as you use it.

We're used to thinking of artificial intelligence as knowledge generated by machines. You can get ChatGPT to write an email for you. "I hope this email finds you well. Blah blah blah blah blah." You can ask Midjourney what Pope Francis would look like in a puffer jacket. "Can I say something without you guys getting mad?" But it turns out there's a vast network of human labor powering AI. There are people training AI every day, sometimes all day, just clicking away on images, on pixels, so that the AI can get better at identifying things the way we humans do. We're going inside the AI factory on Today Explained.

Right now, the biggest song of the year so far is a country song, "Last Night" by Morgan

[1:05]

Wallen. But in spite of country's dominance, country radio, by the numbers, is still more than 99% white. What is it about country music that has not changed the way rock music has changed, the way hip hop has changed, soul, R&B? How is that possible? How is that feasible? We talk about country's race problem. This week on Into It, Vulture's pop culture podcast. You are listening to Today Explained. I'm Sean Rameswaram, and I'm joined by Josh Dzieza from The Verge, who just wrote a big piece about the people behind artificial intelligence.

"It is about the human labor behind artificial intelligence, you know, it's often said that AI learns from data, it finds patterns in data, but that data has to be curated, sorted, labeled, sometimes made by humans. So I wrote about those humans."

[2:09]

"Something called data annotation, sometimes data labeling, the work is pretty weird and there's a huge range in what you might be doing, let's say you log on to your platform, and you might be labeling clothes in social media photos. You might be sorting TikTok videos based on whether they're fast-paced or slow-paced or something. Or you might be like labeling food and saying like, 'yes, that's Diet Coke.' Or you might be looking at chatbot responses and saying this is incorrect or this is profane, or too long or totally off the wall.

"So there's a huge range in the types of jobs you might be doing. What they have in common is they tend to be sort of small, like there's one thing you're doing over and over and over, and also have extremely high-quality standards. Like, let's say you are outlining vehicles or something like that, you have to outline

[3:12]

it to the pixel. So is this like the kind of thing that I do when I'm trying to log into a website and it's like— 'How many of these pictures have cars in them?' Exactly, it's a lot like Captcha. That was actually a method and still is a method of kind of getting this work for free. You know, by definition, it's something AI can't do yet. So when I do a Captcha, I'm helping the backend of some website train AI? Exactly.

"And you may have noticed over the years that Captchas have gotten harder. That's because the AI has gotten better. So you need blurrier, weirder images to raise the bar and also to improve the AI. So in a way, you and I and all of us are AI annotators. Yes, yeah. And annotators are just people who do it, you know, full time for pay. Did you see people doing this kind of work?

"So I did this kind of work. You did it yourself? Yes, I did it myself as a way to meet people who are doing this kind of work. It's all online for the most part.

[4:14]

"So did you, like, apply for a job? Did you cheat on The Verge? I made all of $1.50, I think. But yeah, it was my second job for a couple months. The application is very easy. You just have to speak English and have an e-mail address, and you fill out some basic information. Then you'll get a welcome e-mail, you're invited into a Slack channel, and then you have to start training to actually work. You have to learn what data annotation is and then do a training module for each task, like a video game.

"So, these courses are like instructions. You have to read them carefully and understand each and every bit of them. The instructions can come with scenarios, and they can come with some questions or quizzes. A project has, let's say, three or four courses: you study the first one, you finish it, you go to the next one, and so forth. Then you can start working on this thing for money."

[5:17]

"It took me about a day. What was your shift like? So, it was extremely difficult. You know, I thought I was gonna kinda log in, see what kind of jobs were out there, get into these channels, and move fairly quickly. But I kept flunking the training for the first task I would try to do. I can give you an example. One of the early ones I was doing was just labeling clothing, and the instructions were something like, "label the items of clothing that are real clothes that can be worn by real people," which seems quite self-explanatory. So I just clicked past the instructions, got started, and failed immediately.

"One of the things that tripped me up at first was that there was a magazine that had some photos of clothes in it, and I thought, "well, you can't wear a magazine." But to an AI, these systems are really literal. They're not very smart, and so it's all just pixels. It doesn't understand what a magazine is or what a reflection is. And so you need to label images of clothes, and reflections of clothes in mirrors, and

[6:23]

things like that. And so that was sort of the first curveball. But then it just goes on from there. It's like, label costumes but not suits of armor, and where you draw that line is the difference between, you know, having a job and getting fired. These are the sorts of weird distinctions that get drawn. The full instructions were over 40 pages, and you have to keep referring back to them as you do your work.

"You talk about failing, do you still get paid if you fail? No, I mean you'll get paid for the task that you completed, but then you just get booted out. It says your low quality has, you know, gotten you suspended from this task, and you have to go back and start training again on some new thing and try to qualify. Wow, so it's really in your interest to read the instructions it sounds like."

"Yeah. And I found that, because the instructions are not well-written, they're just inhumanly complex, workers end up teaching each other, doing a lot of free labor, honestly, making YouTube tutorials or holding Google Meets where they try to teach each

[7:25]

other what these instructions actually mean. Is it steady work, Josh? Do you get as many tasks as you want? Is it like dependable income?

"No. So this is one of the things that surprised me. I mean, it's obviously unsteady in the sense that if you don't read the instructions really carefully and you do something wrong, you're going to get banned. And so that is very precarious. But it's also just unsteady even if you're the best annotator in the world.

"There's really spiky demand for this sort of work. There will be a period where there are a bunch of well-paying tasks on there and you can work as much as you want, and then they'll disappear, and you don't know why. And you have no work, or you can only do tasks for a penny or something like that, and then they'll come back. I spoke to a lot of people, and people were frustrated at the low pay.

"Even more than that, people were frustrated that it's steady enough that you can almost depend on it, but not so steady that you aren't constantly at risk of being without work."

[8:27]

"I talked to people who developed these habits of waking up every three hours in case something well-paying appeared, and then, if there was, staying up for 36 hours straight, just sort of labeling. I talked to one guy who was just labeling elbows and knees. He didn't know why, but it was paying well, and he just wanted to do it while it lasted, because then you might be out of work for a week. Elbows and knees?"

"Yeah. There's a lot of stuff on there that you just have no idea what it's for, and that was one of them where it was just like photos of crowds and it was like 'label all the elbows and knees.' So okay, so you're just sitting there labeling elbows and knees for 36 hours straight for how much money?

"It's super variable. Each task pays some amount of money, but for something like that, the workers I talked to were getting paid a couple bucks an hour, as low as $1 an hour. It cannot pay all the bills. It's a side hustle; maybe it covers just one bill, the internet bill, and that's it. Wow. Do they have any idea why they are labeling elbows and knees for $1 an hour, potentially

[9:35]

36 hours straight? No."

★Summary and Japanese translation (up to this point)★

  • Artificial intelligence (AI) relies heavily on human labor, with people training AI by annotating data, which can be anything from labeling clothes in social media pictures to sorting TikTok videos.
  • Despite country music's commercial dominance, country radio remains more than 99% white.
  • People involved in preparing data for AI work on diverse tasks with strict quality standards that demand absolute precision, often similar to the Captcha tasks seen online.
  • The work of data labeling is unpredictable with a volatile demand, leading to irregular income and uncertain job stability.
  • The tasks can be strange as well as laborious, and sometimes without clarity about their ultimate purpose, with workers paid low wages for work that is often repetitive and dull.

  • 人工知能(AI)は、大きな部分で人間の労働力に依存しています。それは、ソーシャルメディアの写真で服をラベル付けすることから、TikTokのビデオを整理することまで、データを注釈することによってAIを訓練する人々が日々活動しています。
  • カントリーミュージックが主流であるにも関わらず、カントリーラジオの99%以上は白人アーティストによって占められています。
  • AIの準備作業に携わる人々は、オンラインで見かけるCaptchaタスクに似た、厳格な品質基準を持つ多様なタスクに取り組んでおり、絶対的な精度が求められます。
  • データラベル付けの仕事は予測不能で、需給が変動するため、不規則な収入と不確定な仕事の安定性をもたらします。
  • そのタスクは奇妙でありながらも労働集約的で、最終的な目的が明確でないこともあります。労働者たちは、しばしば反復的で退屈な作業に対して低賃金を払われます。

★Notable proper nouns, vocabulary, and expressions (up to this point)★
【Proper nouns】

  • Artificial Intelligence (人工知能)
  • GPT (Generative Pre-trained Transformer:生成的事前学習トランスフォーマー)
  • Midjourney (ミッドジャーニー)
  • Pope Francis (教皇フランシスコ)
  • Last Night (ラスト・ナイト)
  • Morgan Wallen (モーガン・ウォレン)
  • Today Explained (トゥデイ・エクスプレインド)
  • Sean Rameswaram (ショーン・ラメスワラム)
  • Josh Dzieza (ジョシュ・ジザ)
  • The Verge (ザ・ヴァージ)
  • Into It (イントゥ・イット)
  • Vulture's Pop Culture Podcast (ヴァルチャーズ・ポップカルチャーポッドキャスト)
  • TikTok (ティックトック)
  • Diet Coke (ダイエットコーク)
  • Captcha (キャプチャ)

【Vocabulary】

  • knowledge(知識)
  • email(メール)
  • labor(労働)
  • images(画像)
  • pixels(ピクセル)
  • dominance(優勢)
  • scenarios(シナリオ)
  • vehicles(車両)
  • suspended(停止された)
  • tasks(タスク)
  • qualify(適任である)
  • instruction(指示)
  • comments(コメント)

【Collocations】

  • get better at(〜を上手になる)
  • look like(〜に見える)
  • get mad at(〜に怒る)
  • following the instructions(指示に従う)
  • train AI(AIを訓練する)
  • log on to(〜にログインする)
  • sorting TikTok videos(TikTokのビデオを分類する)
  • make sense of(〜を理解する)

Well, they know they're training AI and they know it's for some company, but they don't know whose AI or what they're training it to do unless they can kind of guess that, you know, it's a self-driving car or something.
But the elbows and knees?
No, they don't know.
There are just layers and layers of anonymity in the system.
So like each project...
Like, all they know about the platform is that it's called Remotasks.
And then each project is named something totally cryptic like Pillbox Bratwurst or something.
Just like non sequitur codenames.
And so they have no idea what it's for really.
In a minute on Today Explained: what, and who, all this labeling is really for.

[10:40]
Listen, even if you don't have student loans yourself you probably know someone that does, which means you also know the system is broken.
But have you ever wondered how exactly it got this way?
There's a high demand for college because of this huge economic shift in the US economy and the global economy.
Then you have this like free flowing access to credit.
And then you had uninformed consumers.
This is just a recipe for disaster.
Your student loan debt, and then some, explained.
That's this week on The Weeds. Listen wherever you get your podcasts.
Why did the elbow cross the road?
Let me tell you, it's quite humorous.
Today Explained, we are back with Josh Dzieza.
Josh, you just told us that these people

[11:42]
who are slaving away, training AI 36 hours straight, a dollar an hour, whatever it is, they don't know exactly what they are doing it for.
Do you know what they're doing it for?
So AI needs tons of examples to learn from, and so autonomous vehicles is a great example of something where this thing is out in the world steering around a multi-ton piece of metal, the stakes are really high.
You can't have it get confused, it's super dangerous.
There was a case a couple of years ago,
where an Uber self-driving car killed a woman in Arizona.
It could recognize pedestrians, it could recognize bikes, but it struggled to figure out what was happening with a person walking a bike along a street, not near a crosswalk; it just didn't have enough data on that.
So the demand for data for self-driving cars is super high.
If you think about how many times you're driving and you go past construction, or something unexpected happens, you need to have data on it.

[12:42]
So there's thousands and thousands of people whose job it is to get data from these cars and go through and say, here's a pedestrian, here's a traffic cone, here's a pothole.
That's basically how it works with any machine learning system.
Whether it's language or image recognition, you need training data and you need someone to make sure it's the right training data and to put tags on it, to provide that human input.
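The pipeline Josh describes here, humans attaching tags to raw data so a model can learn from it, can be sketched in a few lines. This is a hypothetical illustration: the label set, field names, and validation rules are invented for the example and not taken from any real annotation platform.

```python
# Hypothetical sketch of one labeled training example for a
# self-driving dataset, after a human annotator has tagged it.
# The taxonomy and field names are illustrative only.

LABELS = {"pedestrian", "traffic_cone", "pothole", "vehicle"}

def validate_annotation(ann: dict, img_w: int, img_h: int) -> bool:
    """Check that a bounding-box annotation is usable as training data."""
    x, y, w, h = ann["bbox"]  # top-left corner plus width/height, in pixels
    return (
        ann["label"] in LABELS                  # tag must come from the agreed taxonomy
        and w > 0 and h > 0                     # box must have area
        and 0 <= x and 0 <= y
        and x + w <= img_w and y + h <= img_h   # box must stay inside the image
    )

example = {"label": "pedestrian", "bbox": [104, 62, 38, 110]}
print(validate_annotation(example, img_w=640, img_h=480))  # True
```

Real platforms enforce far stricter rules (pixel-level outlines, 40-page instructions), but the basic shape, a tag from a fixed taxonomy attached to a region of an image, is the same.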
And where are these data annotators based typically?
They're all over the world because you need so much of this data.
The pay tends to be fairly low.
And so you have a lot of people in India, the Philippines, Kenya is a big hub, Venezuela.
Because you often get paid in US dollars and so if there's a place where the currency is crashing and people can do the work and there's fast internet, the work tends to go there.
Since I'm in Kenya, Africa, we get paid, I think, one to two dollars an hour, which is

[13:44]
pretty low.
You can say it's just a side hustle because you cannot cater for your basic needs, whether it's a phone bill or the rent, yeah.
How long have we been outsourcing our data training?
It's been at least a decade, probably more.
One of the turning points happened in the late 2000s.
You've always needed some form of data curation, but before that it was often done by a researcher and their grad students or something.
But with increasing computational power, it became possible to train on more data.
So in the late 2000s, you have people start to use labeled data sets of millions of images instead of a couple of thousand.
We downloaded nearly a billion images and used crowdsourcing technology, like the Amazon Mechanical Turk platform, to help us label these images.
When you reach that scale,

[14:45]
people start going overseas because you need people who will work for less.
Together, almost 50,000 workers from 167 countries around the world helped us to clean, sort, and label nearly a billion candidate images.
Will the need for these data annotators eventually dry up?
Is this job sort of a finite experiment?
There are different views on that.
There's certainly people in the AI industry who think we're going to reach a breakthrough where the AI is going to be so smart that it doesn't need human input anymore.
It's going to become super intelligent.
There are a lot of other people who disagree with that, and certainly historically what has happened is annotation is always kind of getting automated.
Like if you look at those early image recognition systems, that's automated. AI can tell

[15:45]
the difference between an image of a cat and a dog.
But it enables new technologies like self-driving cars.
Now you need even more people doing even more and more complicated forms of annotation,
and that has been the way it's gone,
and you can certainly see a world where these language models are out in the world, and
all the things they're supposed to be doing, like giving health advice or legal advice,
are complex, changing, high-stakes fields, and you're going to need even more human annotation there.
Is this future of perpetual human collaboration with AI going to lead us to some ideal where
the cars will drive themselves perfectly?
Or the, I don't know, the robot doctors will know my knee from my elbow.
I guess I should talk about how brittle these systems are.
That's the word that's used to describe the state of their knowledge.
When you're training something to be accurate, for example, you have people who are rating it for accuracy. But one, maybe

[16:46]

they're not rating it correctly, because it's very time-consuming
and often impossible to fact-check every written response;
often responses are open to interpretation or just too complicated. And two, you don't know that it's learning the right patterns, as opposed to learning to talk like whatever text people have labeled as accurate sounds like.
So one of the risks that I think we're seeing now is that language models in particular have become extremely good bullshitters.
You may have seen the case of the ChatGPT lawyer, who submitted legal filings citing cases that he had asked ChatGPT for.
The lawyer cited more than half a dozen relevant court decisions to make his case for why the lawsuit had precedent.
The only problem, none of those decisions were real.
The program even reportedly told him yes when he asked it to verify that the cases were legitimate.
Sounds like a trash lawyer though, honestly.

[17:48]
Yes, I would certainly not consult ChatGPT for legal advice.
The question is, will it ever get there? If you just throw enough annotation at it, enough data at it, is there going to be a point where it learns what is true or false, or what valid legal reasoning is? Or is it going to continue to just be a better and better mimic, so that you always have the possibility of it making some catastrophic error?
That's an open question, and it's also an open question how you're going to have people who can continue to oversee these models as they get so good at mimicking people.
Yeah.
Right, like, you need a very good lawyer all of a sudden who can critique an AI model that is good at, you know, making up legal advice.
And what about the other side of this?
Just like the treatment of workers?
I mean you mentioned people working 36 hours straight.
If Google might be behind the contract job that someone in Kenya has that's paying them

[18:49]
a dollar an hour to annotate elbows, are they cool with working people like that, 36 hours straight, for, like, a dollar an hour?
That is a question for Google.
But I can say that some of their annotators in the U.S., the people who are rating search results and YouTube results through the platform Appen, have been protesting their conditions, saying that, you know, they're underpaid.
That they don't have health benefits.
Raters are why Google search results are so good.
They make sure that people like you and me get the information we need every single time.
And no one working for Google should be struggling to pay their rent.
Google's defense has been that they are paid fairly, but there tends to be in the industry not a lot of attention on this kind of work.
Part of it, I think, stems from the sense that it won't be needed for long.

★Summary and Japanese translation (up to this point)★

  • Participants in AI training don't usually know which AI they're training or for what purpose. The platform and projects have cryptic names to maintain layers of anonymity in the system.
  • The process involves gathering enormous amounts of data, where workers, known as data annotators, go through the data and tag it appropriately. This data is critical for AI to function properly, especially for autonomous vehicles.
  • The majority of these data annotators are based in countries like India, the Philippines, Kenya, and Venezuela, and are usually paid low wages.
  • The need for data annotators has increased over the years due to the continuous advancement in AI technologies.
  • Questions surrounding the future of AI and human collaboration remain, with concerns ranging from the accuracy of AI to the treatment of the workers who do data annotation.
  • AIトレーニングの参加者は通常、どのAIを訓練しているのか、またそれが何のためのものなのかを知らない。システム内の匿名性を保つため、プラットフォームやプロジェクトには不可解な名前がつけられています。
  • このプロセスでは、大量のデータを収集し、データアノテータと呼ばれる労働者がデータを適切にタグ付けします。このデータは、特に自動運転車などのAIが適切に機能するためには不可欠です。
  • これらのデータアノテータの大部分は、インド、フィリピン、ケニア、ベネズエラなどの国に拠点を置いており、通常は低賃金で働いています。
  • AI技術の連続的な進歩に伴い、データアノテータの必要性は年々増加しています。
  • AIと人間の協働の未来についての疑問が残っており、AIの正確さからデータアノテーションにおける労働者の待遇に至るまでの懸念があります。

★Notable proper nouns, vocabulary, and expressions (up to this point)★
【Proper nouns】

  • Remotasks(リモタスクス)
  • Amazon Mechanical Turk Platform(アマゾン・メカニカル・ターク・プラットフォーム)
  • US dollars(USドル)
  • India(インド)
  • Philippines(フィリピン)
  • Kenya(ケニア)
  • Venezuela(ベネズエラ)
  • Africa(アフリカ)
  • ChatGPT(チャットジーピーティ)
  • Google(グーグル)
  • Arizona(アリゾナ)
  • Appen(アペン)

【Vocabulary】

  • anonymity(匿名性)
  • cryptic(秘密の)
  • codenames(コードネーム)
  • hustle(奮闘)
  • curate(キュレーション)
  • computational(計算の)
  • annotate(注釈をつける)
  • brittle(壊れやすい)
  • consume(消費する)

【Collocations】

  • training AI(AIを訓練する)
  • pedestrian recognition(歩行者認識)
  • vehicle recognition(車両認識)
  • have access to(〜を手に入れる)
  • make sure(確認する)
  • need someone(誰かが必要)
  • turning points(転換点)
  • label data set(データセットをラベル付ける)
  • become possible(可能になる)
  • start going overseas(海外に行き始める)
  • dry up(枯渇する)
  • bullshitter(口から出まかせを言う人)
  • fact-check(事実確認)
  • training data(訓練データ)
  • human input(人間からの入力)

But, you know, the AI will get good enough that you don't need annotators anymore.

[19:49]

And so, it's not really a job, so much as just, like, some temporary work that you're calling on someone to do. And what happens after that is not really your concern. And so, I think there's a sense where companies just don't even really think of it as a labor issue. That they're just kind of buying a bunch of data. That may be changing. I've seen in papers people say these annotators are paid the median wage wherever they're based, or things like that.
I think that when attention is brought to this situation, there often is a push to do better, but it's pretty uneven, and there's just not a lot of transparency in the data pipeline. Even if you want to do better, it's hard.

You know what it sounds like, Josh? It sounds like it might just be easier to pay people to do jobs. Did that occur to you at any point while you were clicking through whatever data that you were annotating? That did occur to me many times while I was annotating.

[20:50]

There's one situation where it was quite acute, where I was tracing pallets in a warehouse for some self-driving forklift. Just the amount of really excruciatingly detailed labor that was going into figuring out how to drive a forklift around to automate, you know, one job, a forklift driver. It was pretty staggering. I mean, there must have been hundreds if not thousands of people working on this thing around the world, just tracing pixel by pixel each pallet and each pallet hole in these dark warehouses.

I guess the hope of these companies is that once you've done all that work, you have this thing that can do it forever. But I don't know that that's true because, you know, the world keeps changing and throwing up new edge cases. And somewhere in this world, that used to be a good union job. Right. Exactly.

What were you hoping people would take away from your piece? What were you hoping people would learn by going inside this AI factory?

[21:51]

I think there are a couple of different things, and a couple of different reasons why it's important to look at this work. The first is just the labor issues that it raises. You have these potentially extremely profitable technologies that rely on often low-paid labor around the world, labor that is rarely discussed.

The second thing, which I wasn't expecting to find, is that the work itself is structurally precarious, beyond even gig work, which is notoriously precarious. The way AI development works, where you need a ton of data to train your model, then a bit more specific data to fill in some edge case, then nothing for a while, and then a ton more data, means that this is going to be a fixture of an AI economy. There are going to be long stretches when people are not working, and times when lots and lots of people need to work, and the way it's set up right now, the workers pay the cost of that.

[22:51]

They're the ones who are unemployed whenever they're not needed, and then they're expected to be on-demand when they are needed.

I also think there should be a better understanding of the way these systems work. Especially with something like ChatGPT, which can tell you that it's an AI trained by OpenAI using reinforcement learning, and all about itself, it acts in very human-like ways, so there's a tendency to think it can reason like a human. But it's important to remember that a lot of that stuff was written manually by humans and then reinforced by humans. There's a sense in which seeing the humans in the system makes you realize how inhuman these machines are, and that they have some pretty glaring weaknesses.

I don't trust him, Josh. I think that's wise for the time being.

[23:52]

Josh Dzieza does investigations at The Verge. You can read his work at theverge.com. His piece that inspired our episode today was titled "Inside the AI Factory," and it also ran on the cover of a recent issue of New York magazine. Today's show was produced by Amanda Llewellyn, edited by Amina Al-Sadi, and fact-checked by Laura Bullard. It was engineered by Patrick Boyd. I'm Sean Rameswaram, and this is Today Explained.

Goodbye.

★Summary and Japanese translation (up to this point)★

  • AI progress will eventually lead to phasing out annotators, with companies mostly seeing their tasks as temporary and not a labor issue, mostly focusing on acquiring data. Some companies have started balancing this out by paying annotators median wages, but overall transparency in the data pipeline is lacking.
  • AIの進歩により、最終的にはアノテーターは段階的に不要になるとされています。企業は彼らのタスクを一時的なものと見なして労働問題としては捉えず、主にデータの取得に焦点を置いています。一部の企業はアノテーターに中央値の賃金を支払うことでバランスを取り始めていますが、全体としてはデータパイプラインの透明性が不足しています。

  • There is a stark contrast between the strenuous effort put into automating a single job, like forklift driving, with AI and the ease with which the same job could be done by a person. This raises questions about the feasibility and ethics of replacing human labor with AI.
  • AIを通じて一つの仕事、例えばフォークリフト運転手を自動化するために費やされる厳しい努力と、その同じ仕事が可能性としては人間によって簡単に行われることができるという事実との間には鮮やかな対比があります。これは、人間の労働をAIに置き換えることの実現可能性と倫理についての疑問を提起します。
  • The author wanted to highlight the labor issues arising from AI development and its dependency on low-paid gig workers. This work is precarious with an erratic workflow and workers pay the cost of unemployment when they're not needed.
  • 著者は、AI開発から生じる労働問題と、低賃金のギグワーカーへの依存性を強調したかった。 この仕事は不安定で、作業フローが不規則であり、必要とされないときには労働者が失業のコストを支払います。
  • Understandings of AI often neglect the human labor involved in their development. Recognizing this human input is crucial in evaluating AI's capabilities, as these systems cannot truly mimic human reasoning.
  • AIの理解はよくその開発に関与する人間の労働を無視します。 これらのシステムは真の人間の推論を模倣することはできないので、AIの能力を評価する際には、この人間の入力を認識することが重要です。
  • The show featuring Josh Dzieza's investigation on AI was produced by Amanda Llewellyn, edited by Amina Al-Sadi, and fact-checked by Laura Bullard. It was engineered by Patrick Boyd and hosted by Sean Rameswaram. Dzieza's piece, titled "Inside the AI Factory," was also published in New York magazine.
  • Josh DziezaのAIに関する調査をフィーチャーした番組は、Amanda Llewellynが制作し、Amina Al-Sadiが編集し、Laura Bullardが事実確認を行いました。Patrick Boydがエンジニアリングを担当し、Sean Rameswaramが司会を務めました。Dziezaの「Inside the AI Factory」という記事は、New York magazineにも掲載されました。

★Notable proper nouns, vocabulary, and expressions (up to this point)★
【Proper nouns】

  • AI (人工知能)
  • Josh (ジョシュ)
  • The Verge (The Verge)
  • OpenAI (オープンAI)
  • ChatGPT(チャットジーピーティー)
  • New York magazine (ニューヨークマガジン)
  • Today Explained (トゥデイ・エクスプレインド)

【Vocabulary】

  • annotator(アノテータ)
  • temporary(一時的な)
  • transparency(透明性)
  • pipeline(パイプライン)
  • pallets(パレット)
  • warehouse(倉庫)
  • forklift(フォークリフト)
  • profitable(利益をもたらす)
  • notorious(悪名高い)
  • manual(手動の)

【Collocations】

  • call on(〜に求める、依頼する)
  • do better(改善する)
  • click through(クリックして通過する)
  • figure out(理解する)
  • take away(持ち帰る)
  • raise issue(問題を提起する)
  • work on(取り組む)
  • pay the cost(費用を負担する)
  • in demand(需要がある)
  • trust in(信頼する)