NTT & Yomiuri: ‘Social Order Could Collapse’ in AI Era

From the Wall Street Journal:

Japan’s largest telecommunications company and the country’s biggest newspaper called for speedy legislation to restrain generative artificial intelligence, saying democracy and social order could collapse if AI is left unchecked.

Nippon Telegraph and Telephone, or NTT, and Yomiuri Shimbun Group Holdings made the proposal in an AI manifesto to be released Monday. Combined with a law passed in March by the European Parliament restricting some uses of AI, the manifesto points to rising concern among American allies about the AI programs U.S.-based companies have been at the forefront of developing.

The Japanese companies’ manifesto, while pointing to the potential benefits of generative AI in improving productivity, took a generally skeptical view of the technology. Without giving specifics, it said AI tools have already begun to damage human dignity because the tools are sometimes designed to seize users’ attention without regard to morals or accuracy.

Unless AI is restrained, “in the worst-case scenario, democracy and social order could collapse, resulting in wars,” the manifesto said.

It said Japan should take measures immediately in response, including laws to protect elections and national security from abuse of generative AI.

A global push is under way to regulate AI, with the European Union at the forefront. The EU’s new law calls on makers of the most powerful AI models to put them through safety evaluations and notify regulators of serious incidents. It also is set to ban the use of emotion-recognition AI in schools and workplaces.

The Biden administration is also stepping up oversight, invoking emergency federal powers last October to compel major AI companies to notify the government when developing systems that pose a serious risk to national security. The U.S., U.K. and Japan have each set up government-led AI safety institutes to help develop AI guidelines.

Still, governments of democratic nations are struggling to figure out how to regulate AI-powered speech, such as social-media activity, given constitutional and other protections for free speech.

NTT and Yomiuri said their manifesto was motivated by concern over public discourse. The two companies are among Japan’s most influential in policy. The government still owns about one-third of NTT, formerly the state-controlled phone monopoly.

Yomiuri Shimbun, which has a morning circulation of about six million copies according to industry figures, is Japan’s most widely read newspaper. Under the late Prime Minister Shinzo Abe and his successors, the newspaper’s conservative editorial line has been influential in pushing the ruling Liberal Democratic Party to expand military spending and deepen the nation’s alliance with the U.S.

The two companies said their executives have been examining the impact of generative AI since last year in a study group guided by Keio University researchers.

The Yomiuri’s news pages and editorials frequently highlight concerns about artificial intelligence. An editorial in December, noting the rush of new AI products coming from U.S. tech companies, said “AI models could teach people how to make weapons or spread discriminatory ideas.” It cited risks from sophisticated fake videos purporting to show politicians speaking.

NTT is active in AI research, and its units offer generative AI products to business customers. In March, it started offering these customers a large language model it calls “tsuzumi,” which is akin to OpenAI’s ChatGPT but is designed to use less computing power and work better in Japanese-language contexts.
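
The article does not describe how customers access tsuzumi. Purely as an illustration, the sketch below assumes an OpenAI-compatible chat-completions interface; the base URL, model name, and API key are invented for the example and are not documented details of NTT’s service.

```python
# Hypothetical sketch: calling a Japanese-focused business LLM such as tsuzumi
# through an OpenAI-compatible chat-completions endpoint. The endpoint URL and
# model identifier below are assumptions made for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-ntt-llm.jp/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="tsuzumi-7b",  # hypothetical model name
    messages=[
        {"role": "system", "content": "あなたは丁寧な日本語で回答するアシスタントです。"},
        {"role": "user", "content": "生成AIの業務利用で注意すべき点を3つ挙げてください。"},
    ],
    temperature=0.2,  # keep answers conservative for business use
)

print(response.choices[0].message.content)
```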

An NTT spokesman said the company works with U.S. tech giants and believes generative AI has valuable uses, but he said the company believes the technology has particular risks if it is used maliciously to manipulate public opinion.

…………………………………………………………………………………………………………….

From the Japan News (Yomiuri Shimbun):

Challenges: Humans cannot fully control generative AI technology

・ Although the accuracy of its results cannot be fully guaranteed, the technology is easy for people to use and its output is easy to understand. This often leads to situations in which generative AI “lies with confidence” and people are “easily fooled.”

・ Challenges include hallucinations, bias and toxicity, retraining through input data, infringement of rights through data scraping and the difficulty of judging created products.

・ Journalism, academic research and other sources have provided accurate and valuable information by thoroughly examining what is correct, and have received some form of compensation or reward for doing so. These incentives for providing and distributing information, which have ensured authenticity and trustworthiness, may collapse.

A need to respond: Generative AI must be controlled both technologically and legally

・ If generative AI is allowed to go unchecked, trust in society as a whole may be damaged as people grow distrustful of one another and incentives are lost for guaranteeing authenticity and trustworthiness. There is a concern that, in the worst-case scenario, democracy and social order could collapse, resulting in wars.

・ Meanwhile, AI technology itself is already indispensable to society. If AI technology is dismissed as a whole as untrustworthy due to out-of-control generative AI, humanity’s productivity may decline.

・ Based on the points laid out in the following sections, measures must be realized to balance the control and use of generative AI from both technological and institutional perspectives, and to make the technology a suitable tool for society.

Point 1: Confronting the out-of-control relationship between AI and the attention economy

・ Any computer’s basic structure, or architecture, including that of generative AI, positions the individual as the basic unit of user. However, because these systems focus so heavily on the individual, problems such as unsound information spaces and damage to individual dignity have arisen with the rise of the attention economy.

・ There are concerns that the unstable nature of generative AI is likely to amplify the above-mentioned problems further. In other words, it cannot be denied that there is a risk of worsening social unrest due to a combination of AI and the attention economy, with the attention economy accelerated by generative AI. To understand such issues properly, it is important to review our views on humanity and society and critically consider what form desirable technology should take.

・ Meanwhile, the out-of-control relationship between AI and the attention economy has already damaged autonomy and dignity, which are essential values that allow individuals in our society to be free. These values must be restored quickly. In doing so, autonomous liberty should not be abandoned, but rather an optimal solution should be sought based on human liberty and dignity, verifying their rationality. In the process, concepts such as information health are expected to be established.

Point 2: Legal restraints to ensure discussion spaces that protect liberty and dignity, and the introduction of technology to cope with related issues

・ Ensuring spaces for discussion in which human liberty and dignity are maintained has not only superficial economic value but also a special value in supporting social stability. The out-of-control relationship between AI and the attention economy is a threat to these values. If generative AI develops further and is left unchecked as it is now, there is no denying that the distribution of malicious information could drive out good information and cause social unrest.

・ If we continue to be unable to sufficiently regulate generative AI, or at the very least if we allow the unconditional application of such technology to elections and security, it could cause enormous and irreversible damage, as the effects of the technology will not be controllable in society. This implies a need for rigid restrictions by law (hard laws that are enforceable) on the usage of generative AI in these areas.

・ In the area of education, especially compulsory education for those age groups in which students’ ability to make appropriate decisions has not fully matured, careful measures should be taken after considering both the advantages and disadvantages of AI usage.

・ The protection of intellectual property rights — especially copyrights — should be adapted to the times in both institutional and technological aspects to maintain incentives for providing and distributing sound information. In doing so, the protections should be made enforceable in practice, without excessive restrictions to developing and using generative AI.

・ These solutions cannot be maintained by laws alone, but rather, they also require measures such as Originator Profile (OP), which is secured by technology.
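
The manifesto names Originator Profile (OP) as a technical measure but does not specify how it works. The sketch below is only a rough illustration of the general idea it gestures at: signing content-originator metadata with a publisher key and verifying it on the reader’s side. The metadata fields and the use of Ed25519 are assumptions for the example, not the OP specification.

```python
# Illustrative sketch only: attach verifiable originator metadata to web content
# and check it on the reader's side. This is NOT the actual Originator Profile
# specification, just the underlying signing/verification idea.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the article's originator metadata.
publisher_key = Ed25519PrivateKey.generate()  # stands in for a vetted publisher key
metadata = json.dumps(
    {"originator": "Example Shimbun", "url": "https://news.example.jp/article/123"},
    sort_keys=True,
).encode("utf-8")
signature = publisher_key.sign(metadata)

# Reader side: verify the metadata against the publisher's public key,
# which a real system would distribute through a trusted registry.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, metadata)
    print("Originator metadata verified")
except InvalidSignature:
    print("Metadata was tampered with or signed by an unknown party")
```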

Point 3: Establishment of effective governance, including legislation

・ The European Union has been developing data-related laws such as the General Data Protection Regulation, the Digital Services Act and the Digital Markets Act. It has been developing regulations through strategic laws with awareness of the need to both control and promote AI, positioning the Artificial Intelligence Act as part of such efforts.

・ Japan does not have such a strategic and systematic data policy. It is expected to require a long time and involve many obstacles to develop such a policy. Therefore, in the long term, it is necessary to develop a robust, strategic and systematic data policy and, in the short term, individual regulations and effective measures aimed at dealing with AI and attention economy-related problems in the era of generative AI.

・However, it would be difficult to immediately introduce legislation, including individual regulations, for such issues. Without excluding consideration of future legislation, the handling of AI must be strengthened by soft laws — both for data (basic) and generative AI (applied) — that offer a co-regulatory approach that identifies stakeholders. Given the speed of technological innovation and the complexity of value chains, it is expected that an agile framework such as agile governance, rather than governance based on static structures, will be introduced.

・ In risk areas that require special caution (see Point 2), hard laws should be introduced without hesitation.

・ In designing a system, attention should be paid to how effectively it protects people’s liberty and dignity, as well as to national interests such as industry, taking into account, to the extent required, the impact on Japan of extraterritorial enforcement and of other countries’ systems.

・ As a possible measure to balance AI use and regulation, a framework should be considered in which the businesses that interact directly with users in the value chain (the middle B in “B2B2X,” where X is the user) reduce and absorb risks when generative AI is used (see the sketch after this list).

・ To create an environment that ensures discussion spaces in which human liberty and dignity are maintained, it is necessary to ensure that there are multiple AIs of various kinds and of equal rank, that they keep each other in check, and that users can refer to them autonomously, so that users do not have to depend on a specific AI. Such moves should be promoted from both institutional and technological perspectives.
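
The manifesto does not spell out how the middle “B” in B2B2X would absorb risk. The following is a minimal sketch of one possible reading: an intermediary service that relays prompts to an upstream generative AI provider and screens replies against its own usage policy before they reach the end user X. The policy list, function names and backend stub are placeholders, not anything proposed in the document.

```python
# Minimal sketch: a "middle B" gateway in a B2B2X chain that screens generative
# AI output before it reaches the end user X, so the intermediary absorbs the
# risk of harmful replies. Policy rules and the backend call are placeholders.
from typing import Callable

BLOCKED_TOPICS = ("election manipulation", "weapon construction")  # placeholder policy

def middle_b_gateway(prompt: str, backend: Callable[[str], str]) -> str:
    """Relay a user prompt to an upstream generative AI backend, then screen the reply."""
    reply = backend(prompt)
    lowered = reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # The intermediary, not the end user, bears responsibility for withholding it.
        return "This response was withheld by the service provider's usage policy."
    return reply

def fake_backend(prompt: str) -> str:
    # Stand-in for a call to an upstream generative AI provider.
    return f"Model answer to: {prompt}"

if __name__ == "__main__":
    print(middle_b_gateway("Summarize today's telecom news.", fake_backend))
```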

Outlook for the Future:

・ Generative AI is a technology that cannot be fully controlled by humanity. However, it is set to enter an innovation phase (changes accompanying social diffusion).

・ In particular, measures to ensure a healthy space for discussion, which constitutes the basis of human and social security (democratic order), must be taken immediately. Legislation (hard laws) is needed, mainly for zoning the use of generative AI (with strong restrictions for elections and security).

・ In addition, from the viewpoint of ecosystem maintenance (including the dissemination of personal information), it is necessary to consider optimizing copyright law in line with the times, in a manner compatible with using generative AI itself, from both institutional and technological perspectives.

・ However, as it takes time to revise the law, the following steps must be taken: the introduction of rules and joint regulations mainly by the media and various industries, the establishment and dissemination of effective technologies, and efforts to revise the law.

・ In this process, the most important thing is to protect the dignity and liberty of individuals in order to achieve individual autonomy. Those involved will study the situation, taking into account critical assessments based on the value of community.

References:

‘Social Order Could Collapse’ in AI Era, Two Top Japan Companies Say (Wall Street Journal): https://www.wsj.com/tech/ai/social-order-could-collapse-in-ai-era-two-top-japan-companies-say-1a71cc1d

‘Joint Proposal on Shaping Generative AI’ by The Yomiuri Shimbun Holdings and NTT Corp.

Major technology companies form AI-Enabled Information and Communication Technology (ICT) Workforce Consortium

MTN Consulting: Generative AI hype grips telecom industry; telco CAPEX decreases while vendor revenue plummets

Cloud Service Providers struggle with Generative AI; Users face vendor lock-in; “The hype is here, the revenue is not”

Amdocs and NVIDIA to Accelerate Adoption of Generative AI for $1.7 Trillion Telecom Industry
