The National Information Technology Development Agency (NITDA) has revealed that it has begun drafting Nigeria’s code of practice for Artificial Intelligence (AI) tools such as ChatGPT.
A spokesperson for the agency, Mrs. Hadiza Umar, stated that while Nigeria has drafted a policy for AI, it does not yet have a code of practice, which she said is necessary.
In her words,
“On AI, as I mentioned earlier, we have drafted the National Artificial Intelligence Policy, which has not yet been approved. Also, the agency is currently developing the Nigerian code of practice for AI. Nigeria cannot adopt the EU and US codes of conduct due to our peculiar situation. But we can leverage on theirs to perfect ours to suit our situation”.
According to Umar, drafting Nigeria’s code of practice for AI is essential to ensuring the responsible and ethical deployment of the technology, which she believes will mitigate its growing risks.
Such AI policies seek to address issues reported with generative AI tools, including fake news, lack of transparency, data privacy violations, bias, and accountability concerns.
In the face of the potential risks of AI technologies like ChatGPT, staying informed is more important than ever; as the technology evolves, so must the policies that regulate it.
NITDA understands this and is therefore drafting policies to curb the spread of fake news, abuse, and other harms associated with AI tools. The code of practice will also mandate NITDA to collaborate with AI developers and policymakers.
While the pros of these AI tools arguably outweigh their cons, their negative effects must be firmly controlled to avert moral, cultural, and ethical problems.
With AI technologies now integrated into so many products, influencing everything from customer service to predictive analytics, nations are grappling with how to craft regulations that balance fostering innovation with ensuring public safety.
The release of ChatGPT last November marked an unprecedented milestone in the development of AI and has spurred the rollout of rival chatbots from several tech companies.
Amid the AI wave sweeping across the globe, governments are considering new regulations to tackle the potential dangers of next-generation AI tools like ChatGPT.
For example, Italy became the first Western country to take action against ChatGPT at the end of March this year, temporarily banning the chatbot over potential violations of the EU’s General Data Protection Regulation (GDPR). However, in April, the Italian government restored access to ChatGPT after its maker implemented changes to satisfy Italian regulators.
Legislators around the world are also beginning to take action with ambitious AI regulations. European lawmakers came a step closer to passing new rules regulating artificial intelligence tools such as ChatGPT, agreeing on tougher draft legislation following a crunch vote.
EU legislators considered outright banning the use of copyrighted material in AI models but instead agreed on a transparency requirement, which has been praised as a compromise that regulates AI without stifling innovation.
The EU’s decision is a positive step in regulating AI and a clear signal from the union that safeguarding fundamental human rights should be the cornerstone of AI regulation.
It has become pertinent for governments and other relevant bodies to keep a close eye on these technological developments, as AI, if not properly regulated, can pose a serious risk to society.