Unlocking Advanced AI Use Cases with Semantic Kernel

By Chrishmal Fernando

A Brief History of Conversational Interfaces and Us

Since 2018, iTelaSoft has been at the forefront of developing conversational interfaces, well before the era of Generative AI. Our early projects included simple web-based virtual assistants as well as more complex use cases involving voice-based telephony applications and SMS-based real-time chat agents. During this time, Amazon Lex and Microsoft LUIS were the go-to tools for rapid development.

While these early conversational interfaces were innovative, the user experience often felt rigid and unnatural. Conversations seemed more like navigating a pre-programmed dialogue, capable of handling only a limited set of scenarios, because the frameworks of the time functioned as finite state machines. As requirements evolved, extending these systems often pushed the limits of those frameworks and forced us to rely on custom code, leading to solutions that became increasingly complex and difficult to maintain.
The Rise of Large Language Models
In 2022, the advent of Large Language Models (LLMs), popularised by OpenAI, marked a turning point in how we build conversational interfaces. Although the programming model was initially unfamiliar, the results were promising.
This new development model required a significant mindset shift: from “designing conversation paths” to “instructing the AI to navigate a conversation” through well-crafted prompts. As an internal project, we developed a virtual assistant for our employees and managers. Leveraging LangChain as the orchestration framework, we connected the virtual assistant to Microsoft Teams, creating a seamless and natural interaction experience for our team.
Innovating with Semantic Kernel
As an innovation partner, iTelaSoft collaborates with clients to add intelligent features to their applications. This often involves integrating AI capabilities such as prediction, personalisation, language understanding, and conversational interfaces. While our AI and data science team primarily worked with Python, many of our applications were built using .NET and JavaScript.
In late 2023, we discovered Microsoft’s Semantic Kernel, which was then an early research project. Semantic Kernel allowed us to effortlessly integrate generative AI capabilities into applications through a clean and intuitive interface. It became a crucial bridge between our AI and development teams, simplifying the integration process and enhancing collaboration.
How Semantic Kernel Transforms AI Development
Semantic Kernel offers a straightforward yet powerful interface for building generative AI use cases. It quickly matured into a robust orchestration framework that supports various LLM providers, AI memory services, and plugins to interact with external systems. Its use in Microsoft’s Copilot implementations underscored its reliability for enterprise-grade AI systems. Despite its simplicity, Semantic Kernel offers a flexible and potent architecture for complex AI scenarios.
Key Components include:
  • AI Services: Text, image, and voice-based AI capabilities.
  • Plugins: Tools for retrieving information or performing actions with external systems.
  • Planning: AI-driven action planning using plugins to produce desired outcomes.
  • Personas: Customisable agent characters tailored to user requests.
  • Filters: Hooks that enable control over the AI pipeline execution.
  • Vector Store Connectors: A well-structured way to manage long-term memory.
The Semantic Kernel SDK makes it easy to configure these aspects, addressing many challenges associated with deploying production-ready AI applications.
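To make this concrete, here is a minimal sketch of how these pieces fit together in C#, assuming a recent Semantic Kernel .NET package; the model name, API key, and ClockPlugin are placeholders, and exact method names can vary slightly between versions.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// AI service: a chat completion model (model name and key are placeholders).
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(modelId: "gpt-4o", apiKey: "<api-key>");

// Plugin: a native class whose [KernelFunction] methods the model can call.
builder.Plugins.AddFromType<ClockPlugin>();
var kernel = builder.Build();

// Planning: let the model decide when to invoke plugin functions.
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
var answer = await kernel.InvokePromptAsync(
    "What is the current UTC time?", new KernelArguments(settings));
Console.WriteLine(answer);

// A deliberately simple example plugin used above.
public class ClockPlugin
{
    [KernelFunction, Description("Returns the current UTC date and time.")]
    public string GetUtcNow() => DateTime.UtcNow.ToString("u");
}
```

The persona, filters, and memory connectors slot into the same builder, which is what keeps the configuration surface small even as a solution grows.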
Enterprise-Grade Application Model
Semantic Kernel aligns with the core attributes of enterprise-grade, production-ready applications. Its AI services and plugins are decoupled and managed through Dependency Injection, while built-in logging and telemetry support trackability and observability. Solutions built with Semantic Kernel can also meet security and compliance requirements and suit various deployment models, including serverless environments.
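As an illustration, here is a hedged sketch of registering the kernel in an ASP.NET Core application; the endpoint, configuration key, and model name are hypothetical, but the point is that the Kernel is resolved from the container like any other dependency.

```csharp
using Microsoft.SemanticKernel;

var builder = WebApplication.CreateBuilder(args);

// Register the kernel and its AI service in the DI container; the host's logging
// and telemetry configuration is picked up automatically for observability.
builder.Services.AddKernel()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4o",                                 // placeholder model
        apiKey: builder.Configuration["OpenAI:ApiKey"]!);  // hypothetical config key

var app = builder.Build();

// Any endpoint or service can now take a Kernel as an injected dependency.
app.MapGet("/summarise", async (Kernel kernel, string text) =>
    (await kernel.InvokePromptAsync($"Summarise in one sentence: {text}")).ToString());

app.Run();
```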
Testability and Responsible AI
Well-architected AI solutions built with Semantic Kernel are highly testable and more predictable. We frequently use xUnit tests during development and continuous deployment, which makes troubleshooting far less daunting.
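For example, the deterministic parts of a solution, such as native plugins and their wiring, can be tested without calling an LLM at all. A small xUnit sketch, reusing the hypothetical ClockPlugin from the earlier example:

```csharp
using Microsoft.SemanticKernel;
using Xunit;

public class ClockPluginTests
{
    [Fact]
    public async Task GetUtcNow_ReturnsAParsableTimestamp()
    {
        // Arrange: build a kernel with only the native plugin, no LLM connection.
        var builder = Kernel.CreateBuilder();
        builder.Plugins.AddFromType<ClockPlugin>();
        var kernel = builder.Build();

        // Act: invoke the plugin function directly through the kernel.
        var result = await kernel.InvokeAsync("ClockPlugin", "GetUtcNow");

        // Assert: the deterministic part of the pipeline behaves as expected.
        Assert.True(DateTime.TryParse(result.ToString(), out _));
    }
}
```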
Given AI’s unpredictable nature, responsible AI practices are essential. Semantic Kernel provides middleware filters to monitor and mitigate potentially harmful scenarios, such as:
  • Preventing the sharing of sensitive information with external parties.
  • Blocking harmful actions performed through plugins.
  • Controlling excessive use of AI services to manage costs.
These filters enable us to maintain control over AI execution, ensuring safe and responsible use.
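A minimal sketch of such a filter follows, assuming a recent Semantic Kernel version that exposes IFunctionInvocationFilter; the DeleteRecord function it blocks is hypothetical.

```csharp
using Microsoft.SemanticKernel;

// Blocks a hypothetical destructive plugin function instead of executing it.
public sealed class BlockDestructiveActionsFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        if (context.Function.Name == "DeleteRecord")
        {
            // Short-circuit the pipeline and return a refusal to the model.
            context.Result = new FunctionResult(context.Function,
                "This action requires human approval and was not executed.");
            return;
        }

        await next(context); // otherwise let the invocation proceed
    }
}

// Registered alongside the other services, e.g.:
// services.AddSingleton<IFunctionInvocationFilter, BlockDestructiveActionsFilter>();
```

The same hook can redact sensitive arguments or count invocations to cap spend, since every plugin call flows through it.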
The Art of Using Semantic Kernel

At iTelaSoft, we’ve leveraged Semantic Kernel to develop a wide range of generative AI applications, from straightforward virtual assistants to complex conversation observers that invoke background actions based on the conversation state. Whether the use case is simple or intricate, Semantic Kernel has given us a blueprint for building scalable and reliable AI solutions.

Complex Use Cases
The behaviour of AI applications can be fine-tuned by configuring personas, plugins, and AI services within Semantic Kernel. While the framework handles much of the planning, you can still influence outcomes by documenting plugins thoroughly and crafting precise prompts. For instance, a scenario requiring user confirmation before taking an action can be achieved through careful configuration, without additional code.
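The sketch below illustrates the idea with a hypothetical leave-request plugin: its descriptions, together with the persona prompt, encode the confirmation rule.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical plugin: the Description attributes are what the model reads when
// planning, so stating the confirmation rule here shapes when the function is called.
public class LeaveRequestPlugin
{
    [KernelFunction]
    [Description("Submits a leave request. Call only after the user has explicitly confirmed the dates.")]
    public string SubmitLeave(
        [Description("First day of leave (ISO 8601).")] string startDate,
        [Description("Last day of leave (ISO 8601).")] string endDate)
        => $"Leave request submitted: {startDate} to {endDate}.";
}

// The persona reinforces the same rule, so the model plans a confirmation turn
// before it ever calls SubmitLeave; no extra orchestration code is required:
//   "You are an HR assistant. Before submitting any leave request,
//    repeat the dates back to the user and wait for an explicit 'yes'."
```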
Adding AI to Line of Business Applications
Semantic Kernel makes it easy to embed intelligence into web applications. Features such as drafting messages or emails from contextual data, or automatically filling forms from clipboard text, become straightforward to implement.
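For instance, a drafting feature can pass contextual data from the application into a prompt template. The variable names and customer details below are hypothetical, and the kernel is assumed to be configured as in the earlier sketches.

```csharp
using Microsoft.SemanticKernel;

// Contextual data from the application becomes template variables.
var arguments = new KernelArguments
{
    ["customerName"] = "Acme Pty Ltd",
    ["lastOrder"]    = "Order #1042, delivered 12 March",
    ["tone"]         = "friendly but professional"
};

// 'kernel' is assumed to already have a chat completion service registered.
var draft = await kernel.InvokePromptAsync(
    """
    Draft a short follow-up email to {{$customerName}} about {{$lastOrder}}.
    Use a {{$tone}} tone and keep it under 120 words.
    """,
    arguments);

Console.WriteLine(draft);
```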
For us at iTelaSoft, Semantic Kernel has provided a standard design and development model for generative AI use cases. It has proven to be a simple yet powerful framework for building reliable, production-ready AI applications, significantly reducing development and troubleshooting time for our team. If you are an enterprise with a .NET or Java development team, Semantic Kernel is probably the easiest way to add generative AI capabilities to your development roadmap.
If you need support, iTelaSoft is keen to share our experiences.
Let us know how we can help
We would love to hear about the challenges you face in your business and discuss how we can assist you.
    Chrishmal Fernando
    AI Engineering Manager
