Ensure compatibility with multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.

Reduce dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to achieve transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Applications

The SDK integrates with LeMUR to allow developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, see the official AssemblyAI blog.

Image source: Shutterstock