ChatGPT: the much-talked-about artificial intelligence and all you need to know!

It is now beyond doubt that technological innovation is on a meteoric rise and is reshaping the world in the 21st century. This is true not only in the medical field, with innovative technologies and vaccines, or in aerospace, with ever more impressive exploration; the evolution has also materialised across many sectors of activity, notably through artificial intelligence in general and the very popular ChatGPT in particular.

The European Parliament defines artificial intelligence as any tool used by a machine to “reproduce human-related behaviours, such as reasoning, planning and creativity”. The European Council defines it as “a system designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content, predictions, recommendations or decisions, influencing the environments they interact with”.

More and more new technologies are built on artificial intelligence, which makes its presence almost “ubiquitous” in the daily lives of consumers of digital products and services. This rise of AI also, no doubt, explains the interest shown at European level, manifested through various regulatory actions, among others:

  • The publication of the European Strategy on Artificial Intelligence in April 2018
  • The establishment of a High Level Expert Group on Artificial Intelligence (AI HLEG)
  • The elaboration of guidelines for the development of trustworthy AI (including the definition of seven ethical requirements for AI)
  • The production of a European Commission White Paper on AI, published in 2020, setting out a clear vision for AI in Europe: “an ecosystem of excellence, trust and reliability”.
  • The proposed Regulation of the European Parliament and of the Council, establishing harmonised rules on artificial intelligence, published on 21 April 2021.
  • On a smaller scale, for example in France, the Commission Nationale de l’Informatique et des Libertés (CNIL) announced on 23 January 2023 the establishment of an artificial intelligence department to strengthen its expertise on these systems and its understanding of the risks to privacy, while preparing for the entry into force of the European regulation on AI.

In Africa, Benin has adopted a national strategy for artificial intelligence and megadata. Indeed, the Government of Benin, through its action programme, has made digital technology one of the foundations of economic and social progress. The significant investments made in this sector since 2016 testify to a great political will to develop the digital economy and transform the country into a regional platform for digital services in a sustainable manner.
With a projected budget of four billion six hundred and eighty million (4,680,000,000) CFA francs over a period of five (5) years, the implementation of this strategy offers the opportunity to exploit AI in targeted areas of development and to position the country as a major AI player in West Africa.

AI is therefore unanimously the “attraction of the century”, both for public authorities and for private organisations such as OpenAI and Google. Indeed, DeepMind, the division in charge of artificial intelligence at Google, has announced the release of its chatbot Sparrow (currently in trial and in a private version).

This conversational bot, similar to ChatGPT, is said to be more powerful and more credible than the latter. Unlike ChatGPT, Sparrow will have:

  • The ability to cite sources of information;
  • Unlimited access to the Internet, which will allow it to incorporate up-to-date information into its responses.


The link between ChatGPT and artificial intelligence

OpenAI, the company behind ChatGPT, describes itself as a “capped-profit” company, specialising in artificial intelligence and based in San Francisco.

Just to clarify: OpenAI is described as capped-profit because, at its founding in December 2015, it was intended to operate as a private, non-profit artificial intelligence research centre committed to making its research results public and accessible to all. It was not until March 2019 that the company decided to monetise some of the results of its work (hence the cap on its profit-making purpose) in order to “attract capital to ensure the sustainability of the research”.

Being specialised in artificial intelligence, it was quite natural that OpenAI developed and made public the first version of ChatGPT, a conversational assistant based on artificial intelligence, on 30 November 2022.

The acronym GPT stands for “Generative Pre-trained Transformer”. The model was built and trained with the help of human trainers, hence supervised learning; it was then fed with data collected via the internet and refined through the reactions of its “early curious” users, hence reinforcement learning.

The learning process of ChatGPT is still ongoing: the first version made available to the public allows OpenAI to have the tool “tested” by the “early curious” in order to detect its limits and flaws, and thus to improve it and produce a new version which, for the time being, is to be a paid one. By the way, we find the marketing strategy quite ingenious!

ChatGPT is presented to the public as a prototype conversational agent using artificial intelligence and specialised in dialogue. It is capable of producing articulate, well-organised and ‘logical’ responses to questions or requests addressed to it. These answers are very similar to what a human could produce, and it did not take long for them to astonish.

First wonders… first fears!

Faced with the level of ChatGPT’s conversational capabilities, a mixed feeling of wonder and fear very quickly arose within the communities of technophiles, lawyers, hackers, IT journalists, academics, etc., whom we will call, for the purposes of this article, the “early curious”.

It should also be noted that this mixed feeling and the numerous reactions to the tool focused solely on its prowess, overshadowing any warning about the conversational agent’s limitations, namely:

  • The tool’s inability to produce answers about, or including information on, events occurring after 2021 (because the data that fed its training was collected up to a cut-off at the end of 2021);
  • Algorithmic biases, a weakness from which many AI-based tools suffer (depending on the quality of the data used in the training phase);
  • The ambiguity and falsity of some of the answers given to factual questions.

Now let us marvel, with children’s eyes, at ChatGPT’s abilities, because yes, this conversational assistant is capable of:

  • Producing essays worthy of student papers on a wide variety of topics;
  • Producing marketing material (advertising messages, commercial emails, etc.);
  • Creating cover letters for all types of applications;
  • Creating stories, jokes, etc., while being careful not to include any scandalous comments (e.g. misogynistic, racist or insulting remarks);
  • Filtering, blocking and not following up on compromising requests and comments; you would think ChatGPT had morals.

As you can see, the list is long, and depending on the field and the need it is still possible to discover other abilities of this tool. Moreover, some of the “early curious” reveal that by combining ChatGPT with other software, it could be used for purposes other than dialogue, in particular to edit websites, produce advertising campaigns, develop applications, etc. However, one should not be mistaken: all is not so rosy! These new advances do not only benefit bona fide users; unfortunately, some may use them for malicious purposes.


ChatGPT: a boon for malicious users

ChatGPT could be a boon to malicious users of digital products and services. Given its ability to produce texts of real argumentative and rhetorical quality, some users could use it to generate manipulative and misleading messages and press releases designed to sow doubt, particularly during election periods. The practice would not be new, but the phenomenon could grow, because producing such messages takes only a few seconds with ChatGPT; their number would increase and their effect on the population would be amplified, making election smears and manipulation viral.

For cybercriminals, the emergence of ChatGPT is also a golden opportunity to multiply and perfect phishing messages and emails. No more mistakes in phishing emails! Mistakes which, let’s face it, were among the signs that allowed some people to recognise a phishing attempt. Indeed, messages and emails written by ChatGPT could perfectly imitate the style and vocabulary of institutional emails in particular, which could increase the number of people trapped by cybercriminals.

Another possibility, and not the least, is the ability to produce malware. According to Check Point Research, a company specialising in cybersecurity: “ChatGPT is already being used by cybercriminals to create malware. The discussion history of a forum frequented by cybercriminals seems to show that hackers have created, thanks to the ChatGPT bot, software capable of stealing certain types of files from a Windows machine, as well as software capable of generating fake content (e-books, training courses, etc.) on the web”.

The conclusion is therefore inescapable: ChatGPT serves both bona fide users and those with darker motivations. More vigilance than usual must be exercised.


ChatGPT, dead on arrival in the teaching world?

Of all the areas that ChatGPT could address, the academic world and teachers in general are probably the most alarmed.

Indeed, since its appearance in November 2022, the tool’s productive capacity has been decried as a danger to the authenticity and personal effort that must be embodied in every student production. The “early curious” immediately saw it as a tool that could supplant real student work in assignments, projects, research and even theses.

Were they wrong?

Not quite! Students, like everyone else, have seized on this tool and tried to derive certain benefits from it, before some of them ran up against its limits once the initial euphoria had passed. The temptation is amplified by the fact that, since Covid, school and university assessment methods have had to evolve to adapt to distance learning. As a result, more and more homework, exams and other assessments are carried out from pupils’ and students’ homes.

In France, more precisely at the University of Lyon, half of the students on one master’s degree reportedly used ChatGPT to complete an assignment. The university has since denied this report, but whether it is true or not, it already shows how plausible such use is within faculties and schools.

Furthermore, still in France, in a letter sent to all students and teachers by Sergei Guriev, Director of Education and Research, the management of Sciences Po announced that “the use, without explicit mention, of ChatGPT at Sciences Po, or any other tool using AI is, except for pedagogical use supervised by a teacher, strictly forbidden for the time being during the production of written or oral work by students, under penalty of sanctions that can go as far as exclusion from the institution or even from higher education”, as reported by the news website Zataz.

The institution justifies this ban by the need to respect the charter to which all students subscribe on enrolment, in particular the commitment to respect the principles of intellectual honesty during evaluations, on pain of permanent exclusion from the institution.

Elsewhere, notably in the United States, the New York City Department of Education is also banning access to ChatGPT on the city’s public school computers. A spokeswoman for the city justified the decision by citing “concerns about security and accuracy of content” (Julien Lausson, “ChatGPT scares New York into banning it from its schools”, Numerama, 5 January 2023).

In France, the Ministry of Education states that it is “closely monitoring this issue and the potential uses of this innovation in schools, colleges and high schools”. Faced with these fears and the first preventive measures within the teaching world, solutions are already beginning to emerge. Just as there is software for detecting plagiarism, software is gradually appearing that can detect whether a production was written entirely or partially by ChatGPT.

Indeed, anti-plagiarism software has its limits when faced with ChatGPT’s output, since the latter generates unique text attributed to no author to date. New solutions therefore need to be developed to overcome the difficulty of detecting any contribution by ChatGPT.

Edward Tian seems to have taken up this challenge. This 22-year-old Princeton student has developed an application capable of detecting the use of artificial intelligence, in particular ChatGPT, in written productions. The application, called GPTZero, could therefore help resolve the ethical problem that worries teachers, rightly or wrongly, about the authorship and authenticity of work presented as students’ own.

Since then, a great deal of other research has been carried out to overcome, like GPTZero, the difficulty of “tracking” ChatGPT down to the smallest lines of student productions.

Conclusion

Whether feared or adored, ChatGPT remains an impressive demonstration of the capabilities of artificial intelligence. Despite the limitations that can be identified, it is important to note that its use can serve both good and bad causes; it all depends on the user’s motivations.

Does the appearance of ChatGPT open the door to ever more intelligent, ever more artificial tools?

In any case, it is becoming urgent to regulate the uses of artificial intelligence systems and to “educate” users, especially young people, in the safe use of these innovative tools. And where does Africa stand on regulating the uses of artificial intelligence, knowing that, in practice, it will not be spared the impact of AI’s expansion?


Martine Ndéo Diouf
