Generative AI: The Challenges And Some Solutions


Generative AI is revolutionary, but it also poses a few challenges. Quite a few solutions are being developed to improve its effectiveness and to reduce inherent biases and copyright infringements. Smaller companies and startups stand to benefit from these.

Generative AI, epitomised by ChatGPT, is not a sudden phenomenon but the result of sustained effort bolstered by the contributions of the open source community. This collaborative environment has been pivotal in realising practical applications of generative AI, from coding assistance and email composition to fostering creativity through the generation of new code and data.

Scope of generative AI in India: Addressing challenges and solutions

At a recent conference on open source, industry leaders and gurus explored pressing questions about the evolution of generative AI. Dr Pushpak Bhattacharyya, professor of computer science and engineering at IIT Bombay, said: “Generative AI aligns with the government of India’s Digital India initiative. A machine translation project called Bhashini, which uses generative AI, aims to translate from all Indian languages to all other Indian languages. It has applications in education, the judicial system, agricultural domain, and so on.”

Horizons of generative AI in the Indian context
  • Generative AI permeates various sectors in India, enhancing user experiences in content creation, drafting emails, and document summarisation.
  • In the legal sector, generative AI aids in summarising large volumes of documents, analysing cases, and improving the efficiency of legal professionals.
  • The agricultural industry benefits from AI advisory systems that analyse weather and soil data to provide recommendations for farming activities.
  • To maximise the benefits of AI, developing user-friendly interfaces that support India’s many regional languages is crucial, leveraging advances in machine translation and NLP for effective communication (a minimal translation sketch follows this list).
  • These efforts align with the Digital India campaign’s goals to create a digitally empowered society.
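As a concrete illustration of the machine translation building block mentioned above, the sketch below loads a publicly available English-to-Hindi model and translates a single sentence. The Helsinki-NLP/opus-mt-en-hi checkpoint from the Hugging Face Hub is used purely as an example and is not part of Bhashini.

from transformers import pipeline

# Minimal translation sketch: "Helsinki-NLP/opus-mt-en-hi" is an example
# English-to-Hindi checkpoint from the Hugging Face Hub, not a Bhashini component.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")

result = translator("The weather forecast predicts rain tomorrow.")
print(result[0]["translation_text"])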

Challenges and pitfalls of generative AI

Generative AI poses challenges inherent in its probabilistic nature, carrying biases and errors due to imperfect training data. Dr Vinesh Sukumar, senior director and head of AI/ML product management at Qualcomm Technologies, highlighted this in his talk. He said: “Given its penetration, given its attraction, and given its attention to consumers, it’s also important that we put some sanity checks around generative AI.”

These systems also lack the human element of emotion and sentiment, presenting a distinct set of hurdles. To reframe the issue, the effectiveness of these models should be considered beyond their primary function of predicting the next word or completing sentences. It is typically measured through benchmarks that assess AI’s performance in more complex tasks such as answering questions, summarising texts, translating languages, and engaging in multimodal tasks like image captioning or video generation.

These benchmarks rely on established metrics such as BLEU and F-scores, which are already integrated into many machine translation systems; a short scoring example follows the list below. The evaluation of generative AI can be broken down into three main categories:

  • Its ethical responsibility, including how it manages biases and inappropriate content.
  • Its utility in downstream applications.
  • Its consistency and reliability in output.
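To make the metric-based evaluation concrete, here is a minimal sketch of scoring a model output against a reference translation with BLEU, using the sacrebleu library. The sentences are invented purely for illustration.

from sacrebleu import corpus_bleu

# Invented example: one model output ("hypothesis") scored against one reference translation.
hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]   # one reference stream, parallel to hypotheses

bleu = corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")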

Another layer is how AI itself understands and relates to its generated content, which is particularly relevant in the context of copyright issues. Ensuring that AI-generated content does not violate copyright laws is critical, and mechanisms for authenticating such content are being developed.

For example, the Content Authenticity Initiative, backed by major corporations, together with the C2PA specification released in 2022, is tackling misinformation by embedding verifiable provenance metadata into digital content. Although initially focused on visual content, this approach has the potential to extend to other forms, such as text and music.

In essence, the industry is actively seeking solutions to these complex problems. With the pace at which generative AI is evolving, along with the legal intricacies that organisations like OpenAI face, significant advancements can be anticipated soon.

Strategies for overcoming challenges in generative AI

The challenges associated with generative AI can be addressed by strategic optimisation. A key strategy is model distillation, which involves condensing the capabilities of a giant AI model into a more compact version suitable for deployment on devices with limited processing power or by smaller companies lacking the computational might of industry leaders.
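A minimal sketch of the idea, assuming a PyTorch setting in which a large ‘teacher’ model and a compact ‘student’ model produce logits for the same batch, combines the ordinary cross-entropy loss with a soft-target loss that nudges the student towards the teacher’s output distribution. The temperature and mixing weight below are illustrative defaults, not tuned values.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: the teacher's softened output distribution.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence pulls the student towards the teacher; the T^2 factor
    # follows the standard distillation formulation.
    kd_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (temperature ** 2)
    # Ordinary cross-entropy against the ground-truth labels.
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1 - alpha) * ce_loss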

Pruning is another effective technique, which entails eliminating less critical neurons or connections to streamline the model without substantially sacrificing performance. Quantisation further aids in this streamlining process by lowering the precision of the model’s numerical weights, thus decreasing its overall footprint.
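The sketch below illustrates both techniques on a toy PyTorch model; the layer sizes, pruning ratio and quantisation settings are arbitrary placeholders rather than recommendations.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy model standing in for a much larger network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 30 per cent of weights with the smallest magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# Quantisation: convert the Linear layers to 8-bit dynamic quantisation to shrink the model.
quantised_model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)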

For smaller enterprises, these methods are beneficial and often necessary due to their limited resources. Instead of building large-scale models from the ground up, these companies can adapt existing pre-trained models to their specific needs through transfer learning, which involves retraining an already trained model with a smaller, task-specific data set to harness its capabilities at a fraction of the cost.
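A minimal transfer-learning sketch using the Hugging Face Transformers library loads a pre-trained backbone, freezes it, and trains only a new task-specific head. The distilbert-base-uncased checkpoint and the three-label head are arbitrary examples.

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load a pre-trained model and attach a fresh classification head for a new task.
model_name = "distilbert-base-uncased"   # placeholder checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Freeze the pre-trained backbone so only the new head is trained,
# keeping compute and data requirements modest.
for param in model.distilbert.parameters():
    param.requires_grad = False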

For startups with limited resources, it is more practical to utilise open source pre-trained models, such as those available through Hugging Face’s Transformers library, and to fine-tune them for specific applications. The release of Meta’s Llama 2 with openly available weights, also accessible via Hugging Face, exemplifies the growing availability of advanced models for smaller players.
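Building on that, the following sketch fine-tunes a small open checkpoint on a custom text corpus with the Trainer API. Here gpt2 stands in for any openly available model, and my_corpus.txt is a hypothetical task-specific data file.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# "gpt2" and "my_corpus.txt" are placeholders for an open checkpoint and a custom data set.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenise the raw text corpus.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()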

“Generative AI is like an athletic person who can excel across various sports when fine-tuned.” – Dr Pushpak Bhattacharyya, professor of Computer Science and Engineering at IIT Bombay

“The open source community has always been playing a catch-up game with the closed source models.” – Kamalkumar Rathinasamy, distinguished technologist and the head of Generative AI Research at Infosys

“Data is critical. And we need many different tools to manage data, data workflows, pipelines, sorting the data, annotating it, tagging it, anonymising it.” – Dr Ibrahim Haddad, executive director of LF AI & Data Foundation

In summary, for startups and smaller companies, the focus should be on acquiring quality data and leveraging these advanced, open source tools and pre-trained models to sidestep the prohibitive costs of developing foundational models from scratch.

The journey of generative AI is as much about the technology itself as it is about the global community that shapes its evolution. From the bustling tech hubs of India to the open source repositories that span the digital universe, collaborative creation has been the lifeblood of this transformative field. As we stand on the brink of discoveries and applications that will undoubtedly reshape our world, the spirit of open collaboration remains the guiding star, ensuring that the evolution of generative AI will be as dynamic and inclusive as the myriad voices that contribute to its ongoing story.


This article is based on a tech talk session at SOSCON (Samsung Open Source Conference) India 2023, which was held online and organised by Samsung.
