🏥⚖️ Imagine AI detecting cancer in an X-ray or drafting a legal contract for your business. In many countries, these are already becoming reality. But in Japan, regulations written long before AI existed are standing in the way — requiring "two or more doctors" for cancer screenings and making AI-generated legal documents potentially illegal. Now, the Japanese government is taking an unusual step: asking the public to report which regulations are holding AI back.

Government Asks Citizens to Report AI Barriers

On February 10, 2026, Japan's Cabinet Office — through its Regulatory Reform Promotion Office and AI Strategy Headquarters — launched an unprecedented public solicitation. The government is asking citizens, businesses, and organizations to report specific regulations and systems that hinder AI adoption across all sectors.

The submission window runs through March 10, and the collected cases will feed into discussions at the Regulatory Reform Promotion Council. The findings are expected to shape the "Regulatory Reform Implementation Plan" to be finalized this summer.

This initiative stems from the "Artificial Intelligence Basic Plan" approved by the Cabinet on December 23, 2025. The plan explicitly states that promoting AI adoption — and reforming outdated regulations to enable it — is essential for regional revitalization, economic growth, and improving citizens' quality of life. The government's stance is clear: "not using AI" is the greatest risk Japan faces.

Healthcare: Can AI Replace One of Two Required Doctors?

Japan's municipal cancer screening system currently requires what's called "double reading" — two or more doctors must independently examine X-ray images to prevent missed diagnoses. While this system exists for good reason, it's creating serious strain as Japan faces an acute shortage of medical professionals.

Radiologists make up only about 2.2% of all doctors in Japan. In rural areas, the shortage is even more severe. Secondary readings typically require doctors to travel to a separate facility after their regular clinic hours, adding a significant physical and mental burden.

AI medical devices are rapidly emerging as a solution. Several AI diagnostic support systems have already received regulatory approval in Japan, including endoscopy image analysis AI and chest X-ray nodule detection AI. In the 2024 medical fee revision, a new reimbursement of 60 points (600 yen, roughly $4) was introduced for procedures assisted by AI lesion detection programs.

The Regulatory Reform Promotion Council is now exploring whether one of the two doctors in the double-reading requirement could be replaced by AI — shifting from "two doctors" to "one doctor plus AI." Formal discussions were scheduled to begin on February 12.

Of course, caution remains. AI relies on training data and may struggle with rare conditions or individual anatomical variations. There's also the risk of "automation bias" — doctors potentially becoming less vigilant when they know AI has already screened the images.
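
To make the proposed shift concrete, here is a minimal sketch in Python of how a "one doctor plus AI" reading flow might be organized. The Reading class, the double_read function, the 0.6 confidence threshold, and the escalation rules are all hypothetical illustrations, not any protocol under discussion at the council. The one design point worth noting is that the doctor reads independently and any disagreement with the AI escalates to a second doctor, which is one way to counter the automation-bias risk described above.

  # Hypothetical sketch of a "one doctor plus AI" double-reading flow.
  # All names, thresholds, and rules are illustrative; no approved protocol is implied.
  from dataclasses import dataclass

  @dataclass
  class Reading:
      suspicious: bool   # does this reader flag the image for follow-up?
      confidence: float  # the reader's confidence in its own call, 0.0 to 1.0

  def double_read(ai: Reading, doctor: Reading, low_confidence: float = 0.6) -> str:
      """Combine one AI screen with one independent human read (hypothetical)."""
      if ai.suspicious and doctor.suspicious:
          return "recall for follow-up"
      if ai.suspicious != doctor.suspicious:
          # Disagreement between the AI and the reading doctor: bring in a second
          # doctor instead of letting either reader quietly override the other.
          return "escalate to second doctor"
      # From here on, both readers said "not suspicious".
      if min(ai.confidence, doctor.confidence) < low_confidence:
          return "escalate to second doctor"
      return "no follow-up"

  # Example: the AI flags the image, the doctor does not -> a second doctor decides.
  print(double_read(Reading(True, 0.8), Reading(False, 0.7)))
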

Legal Field: Is AI Contract Drafting Illegal in Japan?

The legal sector faces an even more fundamental challenge. Article 72 of Japan's Attorney Act prohibits anyone without a lawyer's license from handling legal affairs for compensation — a prohibition known as "hi-ben kōi" (unauthorized practice of law). Violations can result in up to two years' imprisonment or fines up to 3 million yen (approximately $20,000).

As AI becomes increasingly capable of drafting and reviewing contracts — even tailoring them to specific circumstances — the question of whether such services constitute unauthorized legal practice has become a major roadblock for the legal technology industry.

In August 2023, Japan's Ministry of Justice issued guidelines outlining three key criteria for determining whether AI-based contract services violate the law:

  • Compensation purpose: Free services generally don't qualify as unauthorized practice
  • Case specificity: Routine contracts without active legal disputes are less likely to violate the law
  • Nature of legal services: Providing general templates (rather than case-specific legal analysis) is less likely to be problematic

However, the guidelines explicitly state they represent only "general principles," with final judgment left to the courts. This ambiguity continues to make businesses hesitant to adopt AI legal tools. Some companies that previously offered paid AI contract review services switched to free models specifically to avoid legal risk.

The Regulatory Reform Council is now examining how far AI-assisted contract creation should be permitted, including possible clarification or amendment of the Attorney Act.

Japan's Position in the Global AI Regulation Landscape

Japan's approach to AI policy occupies a distinctive middle ground internationally. The EU enacted its AI Act in 2024, implementing a risk-based regulatory framework with strict requirements for high-risk AI applications. The United States, under the Trump administration, has rolled back Biden-era AI executive orders, favoring a deregulatory, innovation-first approach.

Japan chose a third path. The AI Promotion Act (officially, the Act on Promotion of Research, Development and Utilization of AI-Related Technologies), promulgated in June 2025, avoids EU-style hard regulation. Instead, it relies primarily on guidelines and soft law, supplemented by minimal legislative frameworks. An AI Strategy Headquarters was established within the Cabinet to oversee the Artificial Intelligence Basic Plan, which is reviewed annually to keep it flexible.

The current public solicitation exemplifies Japan's "bottom-up" approach to regulatory reform — rather than imposing changes from above, the government is listening to those on the ground who encounter regulatory barriers firsthand.

What's Next: The Summer Implementation Plan

After the March 10 submission deadline, the collected cases will be analyzed and discussed across working groups within the Regulatory Reform Promotion Council. The resulting "Regulatory Reform Implementation Plan," expected this summer, will be formalized through Cabinet approval.

Key focal points include establishing conditions for AI-assisted medical image reading and clarifying the Attorney Act's application to AI legal services. Additionally, the public solicitation may uncover regulatory barriers in sectors that haven't yet received attention — education, finance, construction, agriculture, and beyond.

Japan has entered an era of serious population decline. The "2025 Problem" — when all baby boomers turn 75 or older — has intensified the healthcare workforce crisis. Without AI assistance, maintaining basic social infrastructure will become increasingly difficult.

This citizen-driven approach to identifying and removing regulatory barriers could mark a turning point in Japan's AI adoption journey. The question is whether the pace of reform can match the speed of technological change.


In Japan, healthcare and legal regulations are being reconsidered to make room for AI. What regulatory barriers to AI adoption exist in your country? We'd love to hear your perspective!


Reactions in Japan

I wish people understood rural screening reality. Two of us handle hundreds of secondary readings monthly — we're at our limit. If AI can take one side, we maintain quality without burning out. AI doesn't get fatigued or miss things from exhaustion.

The unauthorized practice barrier is brutal. Our service can technically do so much more, but we limit features just because 'it might violate the Attorney Act.' Competitors overseas operate freely while we're handcuffed.

Public solicitation sounds nice, but it'll probably end with 'we'll consider it.' The name 'Regulatory Reform Council' overpromises. I won't believe it until they set concrete deadlines for specific changes.

I support AI-assisted reading, but who's liable when misdiagnosis happens? The doctor who makes the final call? But what about cases AI misses? I don't see this discussion happening at all.

From the screening frontline: reading doctors' fatigue is genuinely serious. Reading images until late night then seeing patients the next morning is routine. If AI lets doctors rest even a little, it benefits patients too.

Bar associations will fight this tooth and nail. If AI contract review becomes legal, lawyers lose work. 'Regulatory reform' is really just a battle against vested interests.

I understand the direction, but medical AI training data has biases. We must verify whether models are trained on enough data reflecting Japanese body types and cancer characteristics. Rushing ahead and causing accidents would defeat the purpose.

As a freelancer, AI contract checking would be a lifesaver. Lawyers charge tens of thousands of yen per case — painful for freelancers. But checking everything myself is scary. I want affordable AI checking services.

As a lawyer, the AI contract review issue isn't about technology — it's about liability. When AI misses a problematic clause and the client suffers losses, who compensates? You can't deregulate without settling this.

Our town has few hospitals that even offer screenings. Qualified reading doctors are even scarcer. If AI maintains screening quality, we need this ASAP to protect residents' lives. Losing lives because of regulations is backwards.

Having spent 2+ years on regulatory approval, I welcome reform discussions. But what we need isn't 'we'll consider it' — it's concrete timelines and transition measures. For startups, it's a race between running out of funding and getting approval.

As a breast cancer survivor found through screening, reading accuracy is literally life or death. If AI reduces missed cases, I'm all for it. But please never let 'AI is handling it' become an excuse for human complacency.

Soliciting barrier cases basically means the government is admitting they don't know where the problems are. They created the Digital Agency and still can't grasp what's happening on the ground. That itself is the problem.

We review 100+ contracts monthly. I'd love to delegate routine NDAs to AI. But when told there's 'unauthorized practice risk,' we can't adopt it. The MOJ guideline saying 'courts make the final call' is the most frustrating ambiguity.

Japan is said to lag in AI development, but our regulatory reform approach is better than the EU's over-regulation. Bottom-up collection of barrier cases is actually quite rational. The real question is whether they'll act on what they collect.

What's missing from medical AI discussions is insurance reimbursement. If using AI doesn't increase hospital revenue, who'll adopt it? Economic incentives for AI use must be designed alongside deregulation.

Voices from Around the World

Dr. Sarah Mitchell

NHS already has AI diagnostic imaging in pilot across multiple hospitals. Surprised Japan is still stuck on the 'two doctors required' rule. That said, our ethics review for AI deployment took over a year too — regulatory reform speed is a challenge everywhere.

Marcus Johansson

In Sweden, patient data consent is the biggest hurdle for medical AI. GDPR's strictness actually makes collecting AI training data difficult. Japan's approach of identifying specific regulations to fix might actually be more efficient.

James Cooper

The FDA already has an established SaMD (Software as a Medical Device) approval pathway for AI medical devices. Japan's discussion feels a bit slow, but the idea of collecting frontline voices through public solicitation is unique. In the US, lobbyists drive regulatory change, so a system where citizens' voices directly shape policy is actually enviable.

Li Wei

In China, AI diagnostic tools are making a huge difference in reducing rural healthcare gaps. Strong government push means fast deployment. Japan may be too cautious, but their safety assurance approach might be worth learning from. Both countries have good aspects to adopt.

Priya Sharma

In India, lawyer fees are so high that many SMEs simply ignore legal risks. AI contract checking could solve legal access problems in developing countries overnight. Japan's unauthorized practice debate is relevant for many other countries too.

Anna Schmidt

As a German, I can say the EU AI Act is significantly slowing medical AI adoption. Medical AI falls under 'high-risk AI' classification, creating double and triple approval processes. Japan was wise to avoid the EU approach.

Carlos Mendez

Mexico has virtually no regulations for medical AI. No regulation sounds like freedom, but in reality hospitals won't adopt AI without quality assurance. Japan's "organize and appropriately relax" approach will likely drive better adoption long-term.

Rachel Nguyen

Australia's TGA is also building an AI medical device approval framework, facing the same 'existing laws didn't anticipate AI' problem as Japan. Collecting barrier cases from citizens — our government should do this too.

Tomáš Novák

Our small Czech law firm tried adopting AI contract review but gave up due to EU compliance costs. Does Japan have the same structure where regulations squeeze out small players? I don't want a world where only big companies can use AI.

Fatima Al-Hassan

The UAE has an AI Minister accelerating AI at the national level, with a fast-track system for foreign AI medical device approval. Japan's public solicitation is democratic, but sometimes top-down speed is necessary too.

Kim Soo-jin

Korea recently relaxed some regulations on AI-powered telemedicine. Surprised to hear Japan hasn't even fully opened up telemedicine yet. Doctor shortages plus aging is a shared Korean-Japanese challenge. We can learn a lot from each other's examples.

Oluwaseun Adeyemi

In Nigeria, the doctor-to-patient ratio is dozens of times worse than Japan's. AI diagnostic imaging is literally needed at the 'better than nothing' level. Japan's 'AI or doctor' luxury debate sounds like another world from here.

Jean-Pierre Dubois

In France, the notary system complicates contract-related regulations. How far AI can replace notarial duties is exactly like Japan's unauthorized practice debate. Countries with Latin notary traditions all seem to hit the same wall.

Maria Rossi

Italian healthcare also lags in AI adoption, but here it's more about digital literacy than regulation. No matter how much you deregulate, it's meaningless if healthcare workers can't trust and use AI. I wonder how Japan handles that side.

Daniel Park

In Canada, healthcare regulations differ by province, so AI medical device adoption is fragmented. Japan's ability to pursue regulatory reform nationally is a strength. Compared with federal countries, having the central government take the lead is a huge advantage.

Aisha Bello

Singapore uses a regulatory sandbox for AI, allowing pilot AI medical services under specific conditions. Rather than a nationwide rollout, it might be more realistic for Japan to start with special zones. Create experimentation spaces alongside the public solicitation.