Wednesday, January 24, 2024

Using C# code generated by Azure OpenAI Studio

In this article, I will show how you can use the C# code generated by "Azure OpenAI Studio" as the starting point for your own application. It is assumed that you have an Azure subscription and access to Azure OpenAI.

Getting Started

Go to https://oai.azure.com.

Let us play in the “Chat playground”. Click on “Chat playground”.


We will start by grounding our AI bot. We can do that by telling the AI that it specializes in baking recipes. Enter the following to the "System Message":

You are an AI assistant that helps people with baking.

Next, add a prompt requesting a recipe:

Give me an easy recipe for baking a loaf of bread that takes about 30 minutes preparation and about 40 minutes in the oven.

The AI will suggest a recipe. This is the response I received:

Certainly! Here's a simple recipe for baking a loaf of bread with a total time of approximately 1 hour and 10 minutes:

Ingredients:
3 cups all-purpose flour
2 teaspoons active dry yeast
1 teaspoon salt
1 tablespoon sugar
1 cup warm water (around 110°F/43°C)
2 tablespoons olive oil

Instructions:
In a large mixing bowl, combine the warm water and sugar. Stir until the sugar is dissolved, then sprinkle the yeast over the water. Let it sit for about 5 minutes until the yeast becomes frothy.

Add the salt and olive oil to the yeast mixture, and gradually add the flour, one cup at a time. Stir well after each addition until a shaggy dough forms.

Once the dough starts to come together, transfer it onto a lightly floured surface and knead for about 5-7 minutes until it becomes smooth and elastic. If the dough is too sticky, you can add a little more flour as needed.

Shape the dough into a ball, and place it in a greased bowl. Cover the bowl with a clean kitchen towel or plastic wrap, and let the dough rise in a warm place for approximately 30 minutes or until it doubles in size.

Preheat your oven to 375°F (190°C) while the dough is rising.

After the dough has risen, punch it down gently to release any air bubbles. Transfer it to a greased loaf pan, and shape it into a loaf by tucking the edges under.

Cover the loaf pan again with the kitchen towel or plastic wrap, and let it rise for an additional 10 minutes.

Once the dough has risen for the second time, remove the cover and place the pan in the preheated oven. Bake for about 35-40 minutes or until the bread is golden brown and sounds hollow when tapped on the bottom.

Once baked, remove the bread from the oven and let it cool in the pan for a few minutes. Then transfer it to a wire rack to cool completely before slicing.

Enjoy your freshly baked bread!

Using generated C# code

Click on “View code”.


 Change the language to C#.

Note the message at the top. You are asked to add a specific version of a package in your C# app.

// Note: The Azure OpenAI client library for .NET is in preview.
// Install the .NET library via NuGet: dotnet add package Azure.AI.OpenAI --version 1.0.0-beta.5

To get started with a basic console application, click on the “Learn more” link at the bottom.

Choose C#.


Under "Set up" you will be asked to create a new app and add a package to it.

dotnet new console -n azure-openai-quickstart
cd azure-openai-quickstart
dotnet add package Azure.AI.OpenAI --prerelease

Run the above commands, then replace the code in Program.cs with the code that was generated by “Azure OpenAI Studio”.

You will need to enter the AZURE_OPENAI_API_KEY at around line 8 in Program.cs. This is given to you just below the sample code in “Azure OpenAI Studio”.

Copy and paste the key into your code. This is what my code looked like after pasting the key:

If you run “dotnet build”, you will see some errors. This is because we did not install the specific preview version of the Azure.AI.OpenAI package that the generated code was written for; most likely you have a more recent version. The version I have at the time of writing (January 2024) is 1.0.0-beta.12. All the errors pertain to the ChatMessage type, which later previews replaced with role-specific classes such as ChatRequestSystemMessage and ChatRequestUserMessage. Replace the code for responseWithoutStream with the following:

Response<ChatCompletions> responseWithoutStream = await client.GetChatCompletionsAsync(
    new ChatCompletionsOptions()
    {
        DeploymentName = "gpt-35-turbo",
        Messages =
        {
            new ChatRequestSystemMessage(@"You are an AI assistant that helps people find information."),
            new ChatRequestUserMessage(@"Give me an easy recipe for baking a loaf of bread that takes about 30 minutes preparation and about 40 minutes in the oven."),
        },
        Temperature = (float)0.7,
        MaxTokens = 800,
        NucleusSamplingFactor = (float)0.95,
        FrequencyPenalty = 0,
        PresencePenalty = 0,
    });

Since nothing is output yet, let us display the AI response. Add the following line to the bottom of Program.cs. Note that responseWithoutStream is of type Response&lt;ChatCompletions&gt;, so the result is accessed through its Value property:

Console.WriteLine(responseWithoutStream.Value.Choices[0].Message.Content);

Run the app and you will see the response from the AI. In my case, I received a response very similar to the one I got earlier.
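For context, the SDK call above wraps a plain REST request to your Azure OpenAI deployment. Here is an illustrative sketch (in Python, with placeholder endpoint and deployment values) of roughly the URL and headers the library builds; it constructs the request only and does not call the service:

```python
# Sketch of the REST request that the Azure OpenAI client builds under the hood.
# The endpoint, deployment, and key below are placeholders, not real values.

def build_chat_completions_request(endpoint: str, deployment: str,
                                   api_key: str, api_version: str = "2023-05-15"):
    """Return the URL and headers for an Azure OpenAI chat-completions call."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    headers = {
        "api-key": api_key,                  # Azure uses an api-key header,
        "Content-Type": "application/json",  # not an OpenAI Bearer token
    }
    return url, headers

url, headers = build_chat_completions_request(
    "https://fake.openai.azure.com/", "gpt-35-turbo", "fake-key")
print(url)
```

This is why the generated code needs both the endpoint and the key: one forms the URL, the other authenticates the request.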

Conclusion

"Azure OpenAI Studio" can help you get started with the development of a C# app that utilizes services from "Azure OpenAI".


Monday, January 15, 2024

PHP meets OpenAI with image generation

Let's generate images using OpenAI's dall-e-3 service. When using PHP, the open-source openai-php/client library facilitates the process. Check it out at https://github.com/openai-php/client.

In this article, we will learn how easy it is to generate an image with OpenAI and PHP. 

Source code: https://github.com/medhatelmasry/openai-dalle3-php

Prerequisites

In order to proceed, you will need the following:

  1. Subscription with OpenAI - If you do not have a subscription, go ahead and register at https://openai.com/.
  2. PHP - You need to have PHP version 8.2 (or higher) installed on your computer. You can download the latest version from https://www.php.net/downloads.php.
  3. Composer – If you do not have Composer yet, download and install it for your operating system from https://getcomposer.org/download/.

Getting Started

In a suitable working directory, create a folder named openai-dalle3-php with the following terminal window command:

mkdir openai-dalle3-php

Change into the newly created folder with:

cd openai-dalle3-php

Using Composer, install the openai-php/client package by running this command:

composer require openai-php/client

We will be reading our OpenAI API key from a .env text file. Install the following package to enable that:

composer require vlucas/phpdotenv
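Under the hood, a .env loader like vlucas/phpdotenv essentially parses KEY=VALUE lines from a text file into the environment. A minimal sketch of that idea (in Python, for illustration only; the real library handles quoting, comments, and more):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

env = parse_env("OPENAI_API_KEY=put-your-openai-api-key-here\n")
print(env["OPENAI_API_KEY"])
```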

Getting an API KEY from OpenAI

With your OpenAI credentials, log in at https://openai.com/. Click on API.

In the left navigation, click on "API keys". 


Click on the "+ Create new secret key" button.


Give the new key a name. I named it 20240115 representing the date when it was created. You may wish to give it a more meaningful or creative name. Once you are happy with the name, click on the "Create secret key" to generate the key.


Click on the copy button beside the key and paste the API-Key somewhere safe as we will need to use it later on. Note that you cannot view this key again. Click on Done to dismiss the dialog.

We will create a text file named .env in our  openai-dalle3-php folder with the following content:

OPENAI_API_KEY=put-your-openai-api-key-here

Set the API-Key as the value of OPENAI_API_KEY. This may look like this:

OPENAI_API_KEY=sk-OOghjTs8GsuHQklTCWOeT3BasdGJAjklBr3tr8ViZKv21BRN

Let's get coding

We can generate images by obtaining a URL to the newly created image, or by getting the Base-64 representation of the image. We will try both ways.

1) Get URL of the image

In the openai-dalle3-php folder, create a file named image-url.php with the following content:

<?php

require_once __DIR__ . '/vendor/autoload.php';

$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
 
$client = OpenAI::client($_ENV['OPENAI_API_KEY']);

$response = $client->images()->create([
    'model' => 'dall-e-3',
    'prompt' => 'A panda flying over Paris at night',
    'n' => 1,
    'size' => '1024x1024',
    'response_format' => 'url',
]);

foreach ($response->data as $data) {
    $data->url;      // URL of the generated image
    $data->b64_json; // null, since we requested the 'url' format
}

// display the image
echo '<img src="' . $data->url . '" />';

?>

In the above code, note the following:

  • we read the API key from the .env file and pass it to the OpenAI::client($_ENV['OPENAI_API_KEY']); statement
  • we request a URL response with an image size 1024 x 1024
  • we prompt the dall-e-3 service to generate an image of 'A panda flying over Paris at night'.

To run the app, start the PHP web server to listen on port number 8888 with the following command in the openai-dalle3-php folder.

php -S localhost:8888

You can view the resulting image that gets created by OpenAI by pointing your browser to the following URL:

http://localhost:8888/image-url.php

This is the image that got created for me:


Every time you run the app you will likely get a different image.

2) Get Base64 encoding of the image

Create another file named image-b64.php with the following content:

<?php

require_once __DIR__ . '/vendor/autoload.php';

$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
 
$client = OpenAI::client($_ENV['OPENAI_API_KEY']);

$response = $client->images()->create([
    'model' => 'dall-e-3',
    'prompt' => 'A panda flying over Paris at night',
    'n' => 1,
    'size' => '1024x1024',
    'response_format' => 'b64_json',
]);

foreach ($response->data as $data) {
    $data->url;      // null, since we requested the 'b64_json' format
    $data->b64_json; // base64-encoded image data
}

// display base64 encoded image
echo '<img src="data:image/jpeg;base64,' . $data->b64_json. '" />';

?>

The only changes that were made are in the following lines of code:

(1) 'response_format' => 'b64_json',

Whereas previously, we requested a URL, this time we request base-64 encoding.

(2) echo '<img src="data:image/jpeg;base64,' . $data->b64_json. '" />';

When rendering the image, we use base64 rendering instead of an image URL.
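The data: URI in the echo statement embeds the image bytes directly in the HTML, so no separate image request is needed. A small Python sketch showing how such a URI is assembled from raw bytes (the bytes below are a stand-in, not a real image):

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Base64-encode raw bytes into a data: URI usable in an <img> src."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

uri = to_data_uri(b"\xff\xd8\xff")  # stand-in bytes, not a real JPEG
print(uri)
```

In the PHP example, OpenAI has already done the base64 encoding for us; the echo statement only prepends the data:image/jpeg;base64, prefix.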

Point your browser to http://localhost:8888/image-b64.php. This is what I experienced:

Conclusion

There are many services that you can consume from OpenAI, such as chat completion, text completion, and embeddings. Now that you know how things work, go ahead and try some of the other services.

Saturday, January 13, 2024

Generate Images with Azure OpenAI Dall-E 3, Semantic Kernel, and C#

It is very easy to generate images using the OpenAI Dall-E 3 service and Semantic Kernel. You provide the text describing what you want and OpenAI will generate for you the image. In this tutorial, we will use Semantic Kernel and Azure OpenAI to do exactly that.

Source Code: https://github.com/medhatelmasry/DalleImage.git

Companion Video: https://youtu.be/Dr727OhX4HU

What is Semantic Kernel?

This is the official definition obtained from Create AI agents with Semantic Kernel | Microsoft Learn:

Semantic Kernel is an open-source SDK that lets you easily build agents that can call your existing code. As a highly extensible SDK, you can use Semantic Kernel with models from OpenAI, Azure OpenAI, Hugging Face, and more! 

Getting Started

In a suitable directory, create a console application named DalleImage and add to it two packages needed for our application with the following terminal window commands:

dotnet new console -o DalleImage
cd DalleImage
dotnet add package Microsoft.SemanticKernel
dotnet add package System.Configuration.ConfigurationManager

Create a file named App.config in the root folder of the console application and add to it the important parameters that allow access to the Azure OpenAI service. Contents of App.config are like the following:

<?xml version="1.0"?>
<configuration>
    <appSettings>
        <add key="endpoint" value="https://fake.openai.azure.com/" />
        <add key="api-key" value="fakekey-fakekey-fakekey-fakekey" />
        <add key="gpt-deployment" value="gpt-35-turbo" />
        <add key="dalle-deployment" value="dall-e-3" />
    </appSettings>
</configuration>

NOTE: Since I cannot share the endpoint and apiKey with you, I have fake values for these settings.

Currently, the Dall-E 3 model is in preview and only available in the "Sweden Central" Azure data centre according to https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#dall-e-models-preview

Let's Code

Open Program.cs and delete all its contents. Add the following using statements at the top:

using System.Configuration;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.TextToImage;

We need to read the App.config file settings into our application. We will use the ConfigurationManager from namespace System.Configuration. To read settings from App.config with ConfigurationManager, append the following code to Program.cs:

// Get configuration settings from App.config
string _endpoint = ConfigurationManager.AppSettings["endpoint"]!;
string _apiKey = ConfigurationManager.AppSettings["api-key"]!;
string _dalleDeployment = ConfigurationManager.AppSettings["dalle-deployment"]!;
string _gptDeployment = ConfigurationManager.AppSettings["gpt-deployment"]!;

Currently, we need to disable certain warning directives by adding the following into the .csproj file inside the <PropertyGroup> block:

<NoWarn>SKEXP0001, SKEXP0002, SKEXP0011, SKEXP0012</NoWarn>

Then, append this code to Program.cs:

// Create a kernel builder
var builder = Kernel.CreateBuilder(); 
 
// Add OpenAI services to the kernel
builder.AddAzureOpenAITextToImage(_dalleDeployment, _endpoint, _apiKey);
builder.AddAzureOpenAIChatCompletion(_gptDeployment, _endpoint, _apiKey); 
 
// Build the kernel
var kernel = builder.Build();

We created a builder object from Semantic Kernel, added the AddAzureOpenAITextToImage and AddAzureOpenAIChatCompletion services, then obtained an instance of the kernel object.

Get an instance of the "Dall-E" service from the kernel with the following code:

// Get AI service instance used to generate images
var dallE = kernel.GetRequiredService<ITextToImageService>();

Let us create a prompt that generates an image representing a phrase entered by the user. Append this code to Program.cs:

// create execution settings for the prompt
var prompt = @"
Think about an artificial object that represents {{$input}}.";

We then configure the prompt execution settings with:

var executionSettings = new OpenAIPromptExecutionSettings {
    MaxTokens = 256,
    Temperature = 1
};

Temperature is a measure of how creative you want the AI to be. This ranges from 0 to 1, where 0 is least creative and 1 is most creative.
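Temperature works by scaling the model's token probabilities before sampling: dividing the logits by a small temperature sharpens the distribution, while a larger temperature flattens it. A toy Python sketch of the mechanism (illustrative only; this is not Semantic Kernel code):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                            # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

cool = softmax_with_temperature([2.0, 1.0, 0.1], 0.2)  # near-deterministic
warm = softmax_with_temperature([2.0, 1.0, 0.1], 1.0)  # more varied
print(max(cool), max(warm))
```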

We will create a semantic function from our prompt with:

// create a semantic function from the prompt
var genImgFunction = kernel.CreateFunctionFromPrompt(prompt, executionSettings);

Let us ask the user for input with:

// Get a phrase from the user
Console.WriteLine("Enter a phrase to generate an image from: ");
string? phrase = Console.ReadLine();
if (string.IsNullOrEmpty(phrase)) {
    Console.WriteLine("No phrase entered.");
    return;
}

Next, ask the kernel to combine the prompt with the input received from the user, producing a description.

// Invoke the semantic function to generate an image description
var imageDescResult = await kernel.InvokeAsync(genImgFunction, new() { ["input"] = phrase });
var imageDesc = imageDescResult.ToString();

Finally, ask the Dall-E service to do the important work of generating an image based on the description. It returns an image URL. This is done with the following code:

// Use DALL-E 3 to generate an image. 
// In this case, OpenAI returns a URL (though you can ask to return a base64 image)
var imageUrl = await dallE.GenerateImageAsync(imageDesc.Trim(), 1024, 1024);

Let’s print the output URL so that the user can pop it into a browser to see what it looks like:

Console.WriteLine($"Image URL:\n\n{imageUrl}");

Running App

Let’s try it out. Run the app in a terminal window with:

dotnet run

The user is prompted with “Enter a phrase to generate an image from:”. I entered “a lobster flying over the pyramids in giza”, and received this output:


I find it pretty fascinating how OpenAI can generate images based on text-based descriptions. I hope you do too.

Wednesday, January 10, 2024

Build simple C# completion app with Azure OpenAI and Semantic Kernel Tool

In this walkthrough, I will show you how easy it is to use the 'Semantic Kernel Tool' in Visual Studio Code to create a cake baking skill without a single line of code. We will then build a C# console application that uses the skill.

Source: https://github.com/medhatelmasry/sk-library

Companion Video: https://youtu.be/eI5Pr58gFZg

What is Semantic Kernel?

This is the official definition obtained from Create AI agents with Semantic Kernel | Microsoft Learn:

Semantic Kernel is an open-source SDK that lets you easily build agents that can call your existing code. As a highly extensible SDK, you can use Semantic Kernel with models from OpenAI, Azure OpenAI, Hugging Face, and more! 

We now have an extension for Visual Studio Code that makes it very easy to build AI apps that use the large language models (LLMs) available through OpenAI.

In order to proceed with this tutorial, you will need the following prerequisites:

  1. .NET 8.0 Framework
  2. Visual Studio Code
  3. Access to Azure OpenAI
  4. Install the 'Semantic Kernel Tool' extension into Visual Studio Code.

Getting Started

In a suitable working directory, create a folder named sk-library, change directory into the new folder, then start Visual Studio Code with the following terminal commands:

mkdir sk-library
cd sk-library
code .

In Visual Studio Code, select: View >> Command Palette...

Select "Add AI Endpoint" from the list.

You will get asked to choose between AzureOpenAI and OpenAI. I will choose AzureOpenAI in this example.


Next, you are asked for the name of the model deployment that was created in the Azure portal for the Azure OpenAI service. A suitable model for this completion task is text-davinci-003.

We will be asked to enter the Azure OpenAI Endpoint, which you can obtain from Azure.

Finally, we must enter the Azure OpenAI Key.


If all goes well, you will receive this comforting message.

Create a skill without any coding

We can now create a skill without a single line of code. Create sub-folders Skills/Baking with the following terminal commands:

mkdir Skills
cd Skills 
mkdir Baking
cd ..

Start the "Semantic Kernel" view in Visual Studio Code.

Click on "Add Semantic Skill" tool beside Functions.


Click on "Create a new skill folder for the function".


Choose the Skills/Baking folder.

Enter CakeRecipe for the function name.

A description for the function is required. Enter "Recipe for making a cake" for the description.

Two files get created in the Skills/Baking/CakeRecipe folder: skprompt.txt and config.json

skprompt.txt

config.json

Replace contents of skprompt.txt with the following:

I want to bake a fabulous cake. Give me a recipe using the input provided. The cake must be easy, tasty, and cheap. I don't want to spend more than $10 on ingredients. I don't want to spend more than 30 minutes preparing the cake. I don't want to spend more than 30 minutes baking the cake. 

[INPUT]

{{$input}}

[END INPUT]

The above file contains a prompt and a variable {{$input}}. The AI should give us a recipe for whatever type of cake is supplied through the {{$input}} variable.
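The mechanism here is simple variable substitution: the kernel replaces {{$input}} with the user's text before sending the prompt to the model. A minimal Python sketch of that substitution (illustrative only; the real template engine supports much more):

```python
def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{$name}} placeholder with its value."""
    out = template
    for name, value in variables.items():
        out = out.replace("{{$" + name + "}}", value)
    return out

prompt = "Give me a recipe.\n[INPUT]\n{{$input}}\n[END INPUT]"
print(render_prompt(prompt, {"input": "chocolate"}))
```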

Testing our baking skill

Creating a skill with the Visual Studio Code 'Semantic Kernel Tool' is painless. We can now test our baking skill. Click on the arrow on the top-right of the panel.


You will be asked to enter a type of cake that you are interested in baking. I entered: chocolate.


If you check the OUTPUT tab at the bottom of Visual Studio Code, you will see the results.


The actual output I received was:

AI Provider: AzureOpenAI
Model: text-davinci-003
Execute: Baking.CakeRecipe
Parameters:
input: chocolate
Prompt:
I want to bake a fabulous cake. Give me a recipe using the input
provided. The cake must be easy, tasty and cheap. I don't want to spend more than
$10 on ingredients. I don't want to spend more than 30 minutes preparing the
cake. I don't want to spend more than 30 minutes baking the cake. 
[INPUT]
chocolate
[END INPUT]
Result:
Easy Chocolate Cake Recipe
Ingredients:
- 1 ½ cups all-purpose flour
- 1 cup granulated sugar
- ¾ cup cocoa powder
- 1 teaspoon baking soda
- ½ teaspoon baking powder
- ½ teaspoon salt
- 2 eggs
- 1 cup buttermilk
- ½ cup vegetable oil
- 1 teaspoon vanilla extract
Instructions:
1. Preheat oven to 350°F. Grease and flour a 9-inch round cake pan.
2. In a large bowl, whisk together the flour, sugar, cocoa powder, baking soda, baking powder, and salt.
3. In a separate bowl, whisk together the eggs, buttermilk, oil, and vanilla extract.
4. Pour the wet ingredients into the dry ingredients and mix until just combined.
5. Pour the batter into the prepared cake pan and bake for 25-30 minutes, or until a toothpick inserted into the center comes out clean.
6. Allow the cake to cool in the pan for 10 minutes before transferring to a wire rack to cool completely.
7. Serve and enjoy!
Tokens:
Input tokens: 88
Output tokens: 237
Total: 325
Duration:
00:00:11.971
========== Function execution was finished. ==========

Using our baking skill in C# console app

Let us first create a console application in the root sk-library folder, with:

dotnet new console

We need to add two packages: one for Semantic Kernel, and the other for ConfigurationManager, which allows us to read settings from the App.config XML file.

dotnet add package Microsoft.SemanticKernel
dotnet add package System.Configuration.ConfigurationManager

Create a file named App.config in the root folder of the console application and add to it the important parameters that allow access to your Azure OpenAI service. Contents of App.config are like the following:

<?xml version="1.0"?>
<configuration>
    <appSettings>
        <add key="endpoint" value="https://fake.openai.azure.com/" />
        <add key="api-key" value="fakekey-fakekey-fakekey-fakekey" />
        <add key="deployment-name" value="text-davinci-003" />
    </appSettings>
</configuration>

NOTE: Since I cannot share the endpoint and apiKey, I have fake values for these settings.

Replace the code in Program.cs with the following code:

using System.Configuration;
using Microsoft.SemanticKernel;

string _endpoint = ConfigurationManager.AppSettings["endpoint"]!;
string _apikey = ConfigurationManager.AppSettings["api-key"]!;
string _deploymentname = ConfigurationManager.AppSettings["deployment-name"]!;

var builder = Kernel.CreateBuilder();

builder.Services
    .AddAzureOpenAITextGeneration(
        _deploymentname
        , _endpoint
        , _apikey);

var kernel = builder.Build();

var functionDirectory = Path.Combine(Directory.GetCurrentDirectory(), "Skills", "Baking");
var semanticFunctions = kernel.ImportPluginFromPromptDirectory(functionDirectory);

/* request user for input */
Console.WriteLine("Enter a cake type you want to bake:");
var cakeType = Console.ReadLine();
var functionResult = await kernel.InvokeAsync(semanticFunctions["CakeRecipe"],
    new KernelArguments {
                { "input", cakeType }
    });
Console.WriteLine(functionResult);
Console.WriteLine();

Run the app with:

dotnet run

You will be asked to enter a type of cake. I entered: lemon.


This was the output given by the AI.

Lemon Sponge Cake 
Ingredients: 
- 2 cups all-purpose flour 
- 2 teaspoons baking powder 
- ½ teaspoon salt 
- 4 tablespoons butter 
- 1 cup sugar 
- 2 eggs 
- 1 cup milk 
- juice and zest of one lemon
Instructions: 
1. Preheat oven to 350°F (175°C). Grease and flour an 8-inch cake pan.
2. In a medium bowl, sift together the flour, baking powder, and salt. 
3. In a large bowl, beat the butter and sugar together until light and fluffy.
4. Beat in the eggs, one at a time. 
5. Beat in the flour mixture alternately with the milk, beginning and ending with the flour mixture. 
6. Stir in the lemon juice and zest. 
7. Pour the batter into the prepared cake pan. 
8. Bake for 25-30 minutes or until a toothpick inserted into the center comes out clean. 
9. Allow the cake to cool in the pan before serving.

You can build applications with a variety of AI skills. 

Happy Coding.

Tuesday, January 9, 2024

Getting started with 'Semantic Kernel Tool' extension in Visual Studio Code

In this article, let us explore the "Semantic Kernel Tools" extension for Visual Studio Code. We will simply run the C# "Hello World" startup chat-completion application that comes with the tool. The main purpose of this tutorial is to help you configure and run your first C# Semantic Kernel app with the Visual Studio Code extension.

What is Semantic Kernel?

This is the official definition obtained from Create AI agents with Semantic Kernel | Microsoft Learn:

Semantic Kernel is an open-source SDK that lets you easily build agents that can call your existing code. As a highly extensible SDK, you can use Semantic Kernel with models from OpenAI, Azure OpenAI, Hugging Face, and more! 

We now have an extension for Visual Studio Code that makes it very easy to build AI apps that use the large language models (LLMs) available through OpenAI.

In order to proceed with this tutorial, you will need the following prerequisites:

  1. .NET 8.0 Framework
  2. Visual Studio Code
  3. Access to Azure OpenAI
  4. Install the 'Semantic Kernel Tool' extension into Visual Studio Code.

Getting Started

Once you have installed the 'Semantic Kernel Tool' extension, start Visual Studio Code. Click on View >> Command Palette:


Select "Semantic Kernel: Create Project.


Choose "C# Hello World".

Find a suitable working directory on your computer's file system, then click on the "Select location for new app" button.


A new directory named sk-csharp-hello-world is created in your working directory. In Visual Studio Code, you will see the following directories and files:


Expand the config folder. You will see that there are two appsettings.json files - one for Azure-OpenAI and the other for OpenAI. 

Since we will be using Azure OpenAI, copy the file named "appsettings.json.azure-example" to a new file simply named "appsettings.json".

Open appsettings.json in the editor.

{
  "endpointType": "text-completion",
  "serviceType": "AzureOpenAI",
  "serviceId": "text-davinci-003",
  "deploymentOrModelId": "text-davinci-003",
  "endpoint": "https:// ... your endpoint ... .openai.azure.com/",
  "apiKey": "... your Azure OpenAI key ..."
}

We need to make an important adjustment to the deploymentOrModelId setting. The clue for what needs to be done comes from the config/KernelSettings.cs file, which expects property names deploymentId and modelId:


Therefore, replace the deploymentOrModelId setting in appsettings.json with two settings deploymentId and modelId. Our appsettings.json now looks like this:

{
  "endpointType": "text-completion",
  "serviceType": "AzureOpenAI",
  "serviceId": "text-davinci-003",
  "deploymentId": "text-davinci-003",
  "modelId": "text-davinci-003",
  "endpoint": "https:// ... your endpoint ... .openai.azure.com/",
  "apiKey": "... your Azure OpenAI key ..."
}

Of course, the next step is to use the proper values for serviceId, deploymentId, modelId, endpoint, and apiKey. This depends on the names of the various settings in your Azure-OpenAI account. Here is what I have in my Azure-OpenAI account:



The final state of my appsettings.json file is very similar to below. Since I cannot share the endpoint and apiKey with the world, I have fake values for these settings.

{
  "endpointType": "text-completion",
  "serviceType": "AzureOpenAI",
  "serviceId": "gpt-3.5-turbo",
  "deploymentId": "gpt-35-turbo",
  "modelId": "gpt-35-turbo",
  "endpoint": "https://fake.openai.azure.com/",
  "apiKey": "fakekey-fakekey-fakekey-fakekey"
}

We can now run the application and see what it does. In a terminal window, enter:

dotnet run

Here is the interaction I had with the application:

% dotnet run

User > in the summertime

Assistant > In the summertime, the weather is usually warm and sunny. It's a great time to enjoy outdoor activities like swimming, hiking, and barbecues. Many people also go on vacations or spend time at the beach. It's a season of relaxation and fun!

User > 

The prompt that is central to the way the app works is found in prompts/Chat.yaml.

name: Chat
template: |
  <message role="system">You are a helpful assistant.</message>

  {{#each messages}}
    <message role="{{Role}}">{{~Content~}}</message>
  {{/each}}
template_format: handlebars
description: A function that uses the chat history to respond to the user.
input_variables:
  - name: messages
    description: The history of the chat.
    is_required: true
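For a sense of what the rendered prompt looks like, here is a small Python sketch that mimics the handlebars loop in Chat.yaml, turning a (hypothetical) chat history into message tags:

```python
def render_chat(messages):
    """Mimic the Chat.yaml template: a fixed system message followed by
    one <message> tag per (role, content) entry in the chat history."""
    lines = ['<message role="system">You are a helpful assistant.</message>']
    for role, content in messages:
        lines.append(f'<message role="{role}">{content}</message>')
    return "\n".join(lines)

history = [("user", "in the summertime")]  # hypothetical chat history
print(render_chat(history))
```

Each turn of the conversation is appended to the history, so the model always sees the full rendered transcript.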

Now that you were able to get the "Hello World" app working with the "Semantic Kernel Tool" extension for Visual Studio Code, go ahead and explore the other startup application types.

Good luck.