diff --git a/README.md b/README.md index 0cea198a0c026b59af475de20c8826965178ade0..cb68c103d3dafa3884fa5ba029f2b029e2cacf54 100644 --- a/README.md +++ b/README.md @@ -1,39 +1,16 @@ -# Audio - -## Introduction - The audio framework is used to implement audio-related features, including audio playback, audio recording, volume management, and device management. -**Figure 1** Architecture of the audio framework - - -![](figures/en-us_image_0000001152315135.png) - ### Basic Concepts -- **Sampling** - - Sampling is a process to obtain discrete-time signals by extracting samples from analog signals in a continuous time domain at a specific interval. - -- **Sampling rate** - - Sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. Generally, the human hearing range is from 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 22.05 kHz, 16 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz. - -- **Channel** - - Channels refer to different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback. - -- **Audio frame** +- **Sampling**: A process to obtain discrete-time signals by extracting samples from analog signals in a continuous time domain at a specific interval. +- **Sampling Rate**: The number of samples extracted per second from a continuous signal to form a discrete signal. It is measured in Hz. Common sampling rates include 8 kHz, 11.025 kHz, 22.05 kHz, 16 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz. +- **Channel**: Refers to different spatial positions where independent audio signals are recorded or played. The number of channels indicates the number of audio sources used during recording or the number of speakers used during playback. 
+- **Audio Frame**: Represents a data unit in audio processing, typically containing a small duration of audio data (2.5 to 60 milliseconds). This duration is called the sampling time; its exact length depends on the codec and the application requirements. +- **PCM (Pulse Code Modulation)**: A method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples. - Audio data is in stream form. For the convenience of audio algorithm processing and transmission, it is generally agreed that a data amount in a unit of 2.5 to 60 milliseconds is one audio frame. This unit is called sampling time, and its length is specific to codecs and the application requirements. +### Directory Structure -- **PCM** - - Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples. - -## Directory Structure - -The structure of the repository directory is as follows: +The repository structure is as follows: ``` /foundation/multimedia/audio_standard # Service code of the audio framework @@ -50,171 +27,170 @@ The structure of the repository directory is as follows: └── bundle.json # Build file ``` -## Usage Guidelines +### Usage Guidelines -### Audio Playback +#### Audio Playback -You can use the APIs provided in the current repository to convert audio data into audible analog signals, play the audio signals using output devices, and manage playback tasks. The following describes how to use the **AudioRenderer** class to develop the audio playback feature: +To implement audio playback: -1. Call **Create()** with the required stream type to create an **AudioRenderer** instance. +1. Create an `AudioRenderer` instance with the required stream type. + ```cpp + AudioStreamType streamType = STREAM_MUSIC; + std::unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(streamType); + ``` - ``` - AudioStreamType streamType = STREAM_MUSIC; // Stream type example. 
- std::unique_ptr audioRenderer = AudioRenderer::Create(streamType); - ``` +2. (Optional) Use static APIs like `GetSupportedFormats()`, `GetSupportedChannels()`, `GetSupportedEncodingTypes()`, and `GetSupportedSamplingRates()` to determine supported parameters. -2. (Optional) Call the static APIs **GetSupportedFormats()**, **GetSupportedChannels()**, **GetSupportedEncodingTypes()**, and **GetSupportedSamplingRates()** to obtain the supported values of parameters. -3. Prepare the device and call **SetParams()** to set parameters. +3. Set parameters using `SetParams()`: + ```cpp + AudioRendererParams rendererParams; + rendererParams.sampleFormat = SAMPLE_S16LE; + rendererParams.sampleRate = SAMPLE_RATE_44100; + rendererParams.channelCount = STEREO; + rendererParams.encodingType = ENCODING_PCM; - ``` - AudioRendererParams rendererParams; - rendererParams.sampleFormat = SAMPLE_S16LE; - rendererParams.sampleRate = SAMPLE_RATE_44100; - rendererParams.channelCount = STEREO; - rendererParams.encodingType = ENCODING_PCM; + audioRenderer->SetParams(rendererParams); + ``` - audioRenderer->SetParams(rendererParams); - ``` +4. (Optional) Use `GetParams(rendererParams)` to retrieve the set parameters. -4. (Optional) Call **GetParams(rendererParams)** to obtain the parameters set. -5. Call **Start()** to start an audio playback task. -6. Call **GetBufferSize()** to obtain the length of the buffer to be written. +5. Start the playback task with `Start()`. - ``` - audioRenderer->GetBufferSize(bufferLen); - ``` +6. Use `GetBufferSize()` to determine the buffer length for writing data. + ```cpp + audioRenderer->GetBufferSize(bufferLen); + ``` -7. Call **bytesToWrite()** to read the audio data from the source (such as an audio file) and pass it to a byte stream. You can repeatedly call this API to write rendering data. +7. Use `Write(buffer, bytesToWrite)` to write audio data read from a source (such as an audio file) to the stream. This API can be called repeatedly until all data is written. 
- ``` - bytesToWrite = fread(buffer, 1, bufferLen, wavFile); - while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) { - bytesWritten += audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten); - if (bytesWritten < 0) - break; - } - ``` +8. Call `Drain()` to clear the stream before stopping. -8. Call **Drain()** to clear the streams to be played. -9. Call **Stop()** to stop the output. -10. After the playback task is complete, call **Release()** to release resources. +9. Stop the playback with `Stop()`. -The preceding steps describe the basic development scenario of audio playback. +10. Release resources using `Release()` after playback completes. +11. Use `SetVolume(float)` and `GetVolume()` to adjust and retrieve the volume of the audio stream, which ranges from 0.0 to 1.0. -11. Call **SetVolume(float)** and **GetVolume()** to set and obtain the audio stream volume, which ranges from 0.0 to 1.0. +For more details, refer to [audio_renderer.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiorenderer/include/audio_renderer.h) and [audio_info.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). -For details, see [**audio_renderer.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiorenderer/include/audio_renderer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). +#### Audio Recording -### Audio Recording +To implement audio recording: -You can use the APIs provided in the current repository to record audio via an input device, convert the audio into audio data, and manage recording tasks. The following describes how to use the **AudioCapturer** class to develop the audio recording feature: +1. 
Create an `AudioCapturer` instance with the required stream type. + ```cpp + AudioStreamType streamType = STREAM_MUSIC; + std::unique_ptr<AudioCapturer> audioCapturer = AudioCapturer::Create(streamType); + ``` -1. Call **Create()** with the required stream type to create an **AudioCapturer** instance. +2. (Optional) Use static APIs like `GetSupportedFormats()`, `GetSupportedChannels()`, `GetSupportedEncodingTypes()`, and `GetSupportedSamplingRates()` to determine supported parameters. - ``` - AudioStreamType streamType = STREAM_MUSIC; - std::unique_ptr audioCapturer = AudioCapturer::Create(streamType); - ``` +3. Set parameters using `SetParams()`: + ```cpp + AudioCapturerParams capturerParams; + capturerParams.sampleFormat = SAMPLE_S16LE; + capturerParams.sampleRate = SAMPLE_RATE_44100; + capturerParams.channelCount = STEREO; + capturerParams.encodingType = ENCODING_PCM; -2. (Optional) Call the static APIs **GetSupportedFormats()**, **GetSupportedChannels()**, **GetSupportedEncodingTypes()**, and **GetSupportedSamplingRates()** to obtain the supported values of parameters. -3. Prepare the device and call **SetParams()** to set parameters. + audioCapturer->SetParams(capturerParams); + ``` - ``` - AudioCapturerParams capturerParams; - capturerParams.sampleFormat = SAMPLE_S16LE; - capturerParams.sampleRate = SAMPLE_RATE_44100; - capturerParams.channelCount = STEREO; - capturerParams.encodingType = ENCODING_PCM; +4. (Optional) Use `GetParams(capturerParams)` to retrieve the set parameters. - audioCapturer->SetParams(capturerParams); - ``` +5. Start the recording task with `Start()`. -4. (Optional) Call **GetParams(capturerParams)** to obtain the parameters set. -5. Call **Start()** to start an audio recording task. -6. Call **GetBufferSize()** to obtain the length of the buffer to be written. +6. Use `GetBufferSize()` to determine the buffer length for reading data. + ```cpp + audioCapturer->GetBufferSize(bufferLen); + ``` - ``` - audioCapturer->GetBufferSize(bufferLen); - ``` +7. 
Use `Read(buffer, bufferLen, isBlockingRead)` to read captured audio data. This API can be called repeatedly until manually stopped. + ```cpp + // Set isBlockingRead = true for a blocking read or false for a non-blocking read. + while (numBuffersToCapture) { + bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlockingRead); + if (bytesRead < 0) { + break; + } else if (bytesRead > 0) { + fwrite(buffer, 1, bytesRead, recFile); // Writes the recorded data into a file + numBuffersToCapture--; + } + } + ``` -7. Call **bytesRead()** to read the captured audio data and convert it to a byte stream. The application will repeatedly call this API to read data until it is manually stopped. - ``` - // set isBlocking = true/false for blocking/non-blocking read - bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlocking); - while (numBuffersToCapture) { - bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlockingRead); - if (bytesRead < 0) { - break; - } else if (bytesRead > 0) { - fwrite(buffer, size, bytesRead, recFile); // example shows writes the recorded data into a file - numBuffersToCapture--; - } - } - ``` +8. (Optional) Use `Flush()` to clear the recording stream buffer. -8. (Optional) Call **Flush()** to clear the recording stream buffer. -9. Call **Stop()** to stop recording. +9. Stop the recording with `Stop()`. -10. After the recording task is complete, call **Release()** to release resources. +10. Release resources using `Release()` after recording completes. -For details, see [**audio_capturer.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocapturer/include/audio_capturer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). 
+For more details, refer to [audio_capturer.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocapturer/include/audio_capturer.h) and [audio_info.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). -### Audio Management -You can use the APIs provided in the [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) to control the volume and devices. -1. Call **GetInstance()** to obtain an **AudioSystemManager** instance. - ``` - AudioSystemManager *audioSystemMgr = AudioSystemManager::GetInstance(); - ``` -#### Volume Control -2. Call **GetMaxVolume()** and **GetMinVolume()** to obtain the maximum volume and minimum volume allowed for an audio stream. - ``` - AudioVolumeType streamType = AudioVolumeType::STREAM_MUSIC; - int32_t maxVol = audioSystemMgr->GetMaxVolume(streamType); - int32_t minVol = audioSystemMgr->GetMinVolume(streamType); - ``` -3. Call **SetVolume()** and **GetVolume()** to set and obtain the volume of the audio stream, respectively. - ``` - int32_t result = audioSystemMgr->SetVolume(streamType, 10); - int32_t vol = audioSystemMgr->GetVolume(streamType); - ``` -4. Call **SetMute()** and **IsStreamMute** to set and obtain the mute status of the audio stream, respectively. - ``` - int32_t result = audioSystemMgr->SetMute(streamType, true); - bool isMute = audioSystemMgr->IsStreamMute(streamType); - ``` -5. Call **SetRingerMode()** and **GetRingerMode()** to set and obtain the ringer mode, respectively. The supported ringer modes are the enumerated values of **AudioRingerMode** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). 
- ``` - int32_t result = audioSystemMgr->SetRingerMode(RINGER_MODE_SILENT); - AudioRingerMode ringMode = audioSystemMgr->GetRingerMode(); - ``` -6. Call **SetMicrophoneMute()** and **IsMicrophoneMute()** to set and obtain the mute status of the microphone, respectively. - ``` - int32_t result = audioSystemMgr->SetMicrophoneMute(true); - bool isMicMute = audioSystemMgr->IsMicrophoneMute(); - ``` -#### Device Control -7. Call **GetDevices**, **deviceType_**, and **deviceRole_** to obtain information about the audio input and output devices. For details, see the enumerated values of **DeviceFlag**, **DeviceType**, and **DeviceRole** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). - ``` - DeviceFlag deviceFlag = OUTPUT_DEVICES_FLAG; - vector> audioDeviceDescriptors - = audioSystemMgr->GetDevices(deviceFlag); - std::shared_ptr audioDeviceDescriptor = audioDeviceDescriptors[0]; - cout << audioDeviceDescriptor->deviceType_; - cout << audioDeviceDescriptor->deviceRole_; - ``` -8. Call **SetDeviceActive()** and **IsDeviceActive()** to activate or deactivate an audio device and obtain the device activation status, respectively. - ``` - DeviceType deviceType = DeviceType::DEVICE_TYPE_SPEAKER; - int32_t result = audioSystemMgr->SetDeviceActive(deviceType, true); - bool isDevActive = audioSystemMgr->IsDeviceActive(deviceType); - ``` -9. (Optional) Call other APIs, such as **IsStreamActive()**, **SetAudioParameter()**, and **GetAudioParameter()**, provided in [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) if required. -10. Call **AudioManagerNapi::On** to subscribe to system volume changes. If a system volume change occurs, the following parameters are used to notify the application: -**volumeType**: type of the system volume changed. 
-**volume**: current volume level. -**updateUi**: whether to show the change on the UI. (Set **updateUi** to **true** for a volume increase or decrease event, and set it to **false** for other changes.) - ``` +#### Audio Management + +Use the APIs in [audio_system_manager.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) to manage volume and devices. + +1. Obtain an `AudioSystemManager` instance: + ```cpp + AudioSystemManager *audioSystemMgr = AudioSystemManager::GetInstance(); + ``` + +##### Volume Control + +2. Use `GetMaxVolume()` and `GetMinVolume()` to obtain the allowed volume range for a stream. + ```cpp + AudioVolumeType streamType = AudioVolumeType::STREAM_MUSIC; + int32_t maxVol = audioSystemMgr->GetMaxVolume(streamType); + int32_t minVol = audioSystemMgr->GetMinVolume(streamType); + ``` + +3. Use `SetVolume()` and `GetVolume()` to set and retrieve the volume of the audio stream. + ```cpp + int32_t result = audioSystemMgr->SetVolume(streamType, 10); + int32_t vol = audioSystemMgr->GetVolume(streamType); + ``` + +4. Use `SetMute()` and `IsStreamMute()` to set and retrieve the mute status of the audio stream. + ```cpp + int32_t result = audioSystemMgr->SetMute(streamType, true); + bool isMute = audioSystemMgr->IsStreamMute(streamType); + ``` + +5. Use `SetRingerMode()` and `GetRingerMode()` to set and retrieve the ringer mode. The supported ringer modes are defined in `AudioRingerMode` in [audio_info.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). + +6. Use `SetMicrophoneMute()` and `IsMicrophoneMute()` to set and retrieve the mute status of the microphone. + ```cpp + int32_t result = audioSystemMgr->SetMicrophoneMute(true); + bool isMicMute = audioSystemMgr->IsMicrophoneMute(); + ``` + +##### Device Control + +7. 
Use `GetDevices(deviceFlag)` to obtain information about audio input and output devices. Use `deviceType_` and `deviceRole_` to identify the device type and role. + ```cpp + DeviceFlag deviceFlag = OUTPUT_DEVICES_FLAG; + vector<std::shared_ptr<AudioDeviceDescriptor>> audioDeviceDescriptors = audioSystemMgr->GetDevices(deviceFlag); + std::shared_ptr<AudioDeviceDescriptor> audioDeviceDescriptor = audioDeviceDescriptors[0]; + cout << audioDeviceDescriptor->deviceType_; + cout << audioDeviceDescriptor->deviceRole_; + ``` + +8. Use `SetDeviceActive()` and `IsDeviceActive()` to activate or deactivate an audio device and check its activation status. + ```cpp + DeviceType deviceType = DeviceType::DEVICE_TYPE_SPEAKER; + int32_t result = audioSystemMgr->SetDeviceActive(deviceType, true); + bool isDevActive = audioSystemMgr->IsDeviceActive(deviceType); + ``` + +9. Use other APIs like `IsStreamActive()`, `SetAudioParameter()`, and `GetAudioParameter()` if needed. + +10. Use `AudioManagerNapi::On` to subscribe to system volume changes. The following parameters are used to notify the application of a volume change: + - `volumeType`: Type of the system volume changed. + - `volume`: Current volume level. + - `updateUi`: Indicates whether to show the change on the UI (`true` for a volume increase or decrease event, `false` for other changes). + + ```js const audioManager = audio.getAudioManager(); export default { @@ -228,111 +204,103 @@ You can use the APIs provided in the [**audio_system_manager.h**](https://gitee. } ``` -#### Audio Scene -11. Call **SetAudioScene()** and **getAudioScene()** to set and obtain the audio scene, respectively. - ``` - int32_t result = audioSystemMgr->SetAudioScene(AUDIO_SCENE_PHONE_CALL); - AudioScene audioScene = audioSystemMgr->GetAudioScene(); - ``` -For details about the supported audio scenes, see the enumerated values of **AudioScene** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).
-#### Audio Stream Management -You can use the APIs provided in [**audio_stream_manager.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiomanager/include/audio_stream_manager.h) to implement stream management. -1. Call **GetInstance()** to obtain an **AudioSystemManager** instance. - ``` - AudioStreamManager *audioStreamMgr = AudioStreamManager::GetInstance(); - ``` +##### Audio Scene + +11. Use `SetAudioScene()` and `GetAudioScene()` to set and retrieve the audio scene. The supported audio scenes are the enumerated values of `AudioScene` defined in [audio_info.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). -2. Call **RegisterAudioRendererEventListener()** to register a listener for renderer state changes. A callback will be invoked when the renderer state changes. You can override **OnRendererStateChange()** in the **AudioRendererStateChangeCallback** class. - ``` - const int32_t clientPid; - - class RendererStateChangeCallback : public AudioRendererStateChangeCallback { - public: - RendererStateChangeCallback = default; - ~RendererStateChangeCallback = default; - void OnRendererStateChange( - const std::vector> &audioRendererChangeInfos) override - { - cout<<"OnRendererStateChange entered"< callback = std::make_shared(); - int32_t state = audioStreamMgr->RegisterAudioRendererEventListener(clientPid, callback); - int32_t result = audioStreamMgr->UnregisterAudioRendererEventListener(clientPid); - ``` +##### Audio Stream Management
- ``` - const int32_t clientPid; - - class CapturerStateChangeCallback : public AudioCapturerStateChangeCallback { - public: - CapturerStateChangeCallback = default; - ~CapturerStateChangeCallback = default; - void OnCapturerStateChange( - const std::vector> &audioCapturerChangeInfos) override - { - cout<<"OnCapturerStateChange entered"< callback = std::make_shared(); - int32_t state = audioStreamMgr->RegisterAudioCapturerEventListener(clientPid, callback); - int32_t result = audioStreamMgr->UnregisterAudioCapturerEventListener(clientPid); - ``` -4. Call **GetCurrentRendererChangeInfos()** to obtain information about all running renderers, including the client UID, session ID, renderer information, renderer state, and output device details. - ``` - std::vector> audioRendererChangeInfos; - int32_t currentRendererChangeInfo = audioStreamMgr->GetCurrentRendererChangeInfos(audioRendererChangeInfos); - ``` +1. Obtain an `AudioStreamManager` instance: + ```cpp + AudioStreamManager *audioStreamMgr = AudioStreamManager::GetInstance(); + ``` -5. Call **GetCurrentCapturerChangeInfos()** to obtain information about all running capturers, including the client UID, session ID, capturer information, capturer state, and input device details. - ``` - std::vector> audioCapturerChangeInfos; - int32_t currentCapturerChangeInfo = audioStreamMgr->GetCurrentCapturerChangeInfos(audioCapturerChangeInfos); - ``` - For details, see **audioRendererChangeInfos** and **audioCapturerChangeInfos** in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h). +2. Register a listener for renderer state changes using `RegisterAudioRendererEventListener()`. Override `OnRendererStateChange()` in the `AudioRendererStateChangeCallback` class. + ```cpp + const int32_t clientPid; -6. Call **IsAudioRendererLowLatencySupported()** to check whether low latency is supported. 
- ``` - const AudioStreamInfo &audioStreamInfo; - bool isLatencySupport = audioStreamMgr->IsAudioRendererLowLatencySupported(audioStreamInfo); - ``` -#### Using JavaScript APIs -JavaScript applications can call the audio management APIs to control the volume and devices. -For details, see [**js-apis-audio.md**](https://gitee.com/openharmony/docs/blob/master/en/application-dev/reference/apis-audio-kit/js-apis-audio.md#audiomanager). + class RendererStateChangeCallback : public AudioRendererStateChangeCallback { + public: + RendererStateChangeCallback() = default; + ~RendererStateChangeCallback() = default; + void OnRendererStateChange( + const std::vector<std::unique_ptr<AudioRendererChangeInfo>> &audioRendererChangeInfos) override + { + cout << "OnRendererStateChange entered" << endl; + } + }; -### Bluetooth SCO Call -You can use the APIs provided in [**audio_bluetooth_manager.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/frameworks/native/bluetoothclient/audio_bluetooth_manager.h) to implement Bluetooth calls over synchronous connection-oriented (SCO) links. + std::shared_ptr<RendererStateChangeCallback> callback = std::make_shared<RendererStateChangeCallback>(); + int32_t state = audioStreamMgr->RegisterAudioRendererEventListener(clientPid, callback); + int32_t result = audioStreamMgr->UnregisterAudioRendererEventListener(clientPid); + ``` -1. Call **OnScoStateChanged()** to listen for SCO link state changes. - ``` - const BluetoothRemoteDevice &device; - int state; - void OnScoStateChanged(const BluetoothRemoteDevice &device, int state); - ``` +3. Register a listener for capturer state changes using `RegisterAudioCapturerEventListener()`. Override `OnCapturerStateChange()` in the `AudioCapturerStateChangeCallback` class. 
+ ```cpp + const int32_t clientPid; + + class CapturerStateChangeCallback : public AudioCapturerStateChangeCallback { + public: + CapturerStateChangeCallback() = default; + ~CapturerStateChangeCallback() = default; + void OnCapturerStateChange( + const std::vector<std::unique_ptr<AudioCapturerChangeInfo>> &audioCapturerChangeInfos) override + { + cout << "OnCapturerStateChange entered" << endl; + } + }; + + std::shared_ptr<CapturerStateChangeCallback> callback = std::make_shared<CapturerStateChangeCallback>(); + int32_t state = audioStreamMgr->RegisterAudioCapturerEventListener(clientPid, callback); + int32_t result = audioStreamMgr->UnregisterAudioCapturerEventListener(clientPid); + ``` + +4. Use `GetCurrentRendererChangeInfos()` to obtain information about all running renderers, including client UID, session ID, renderer info, renderer state, and output device details. + ```cpp + std::vector<std::unique_ptr<AudioRendererChangeInfo>> audioRendererChangeInfos; + int32_t currentRendererChangeInfo = audioStreamMgr->GetCurrentRendererChangeInfos(audioRendererChangeInfos); + ``` + +5. Use `GetCurrentCapturerChangeInfos()` to obtain information about all running capturers, including client UID, session ID, capturer info, capturer state, and input device details. + ```cpp + std::vector<std::unique_ptr<AudioCapturerChangeInfo>> audioCapturerChangeInfos; + int32_t currentCapturerChangeInfo = audioStreamMgr->GetCurrentCapturerChangeInfos(audioCapturerChangeInfos); + ``` + +6. Use `IsAudioRendererLowLatencySupported()` to check whether low latency is supported. + ```cpp + AudioStreamInfo audioStreamInfo; + bool isLatencySupport = audioStreamMgr->IsAudioRendererLowLatencySupported(audioStreamInfo); + ``` -##### Using JavaScript APIs -JavaScript applications can use the audio management APIs to control volume and devices. For more information, see [js-apis-audio.md](https://gitee.com/openharmony/docs/blob/master/en/application-dev/reference/apis-audio-kit/js-apis-audio.md#audiomanager). -##### Bluetooth SCO Call -Use the APIs in [audio_bluetooth_manager.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/frameworks/native/bluetoothclient/audio_bluetooth_manager.h) to implement Bluetooth calls over synchronous connection-oriented (SCO) links. -2. (Optional) Use `RegisterBluetoothScoAgListener()` to register a Bluetooth SCO listener and `UnregisterBluetoothScoAgListener()` to unregister it when no longer needed. -## Supported Devices -The following lists the device types supported by the audio framework. +##### Using JavaScript APIs -1. 
**USB Type-C Headset** +JavaScript applications can use the audio management APIs to control volume and devices. For more information, see [js-apis-audio.md](https://gitee.com/openharmony/docs/blob/master/en/application-dev/reference/apis-audio-kit/js-apis-audio.md#audiomanager). - A digital headset that consists of its own digital-to-analog converter (DAC) and amplifier that functions as part of the headset. +##### Bluetooth SCO Call -2. **WIRED Headset** +Use the APIs in [audio_bluetooth_manager.h](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/frameworks/native/bluetoothclient/audio_bluetooth_manager.h) to implement Bluetooth calls over synchronous connection-oriented (SCO) links. - An analog headset that does not contain any DAC. It can have a 3.5 mm jack or a USB-C socket without DAC. +1. Use `OnScoStateChanged()` to listen for SCO link state changes. + ```cpp + // Callback invoked when the SCO link state changes. + void OnScoStateChanged(const BluetoothRemoteDevice &device, int state); + ``` -3. **Bluetooth Headset** +2. (Optional) Use `RegisterBluetoothScoAgListener()` to register a Bluetooth SCO listener and `UnregisterBluetoothScoAgListener()` to unregister it when no longer needed. - A Bluetooth Advanced Audio Distribution Mode (A2DP) headset for wireless audio transmission. +### Supported Devices -4. **Internal Speaker and MIC** +The audio framework supports the following device types: - A device with a built-in speaker and microphone, which are used as default devices for playback and recording, respectively. +1. **USB Type-C Headset**: A digital headset with its own DAC and amplifier. +2. **Wired Headset**: An analog headset without a DAC, which may have a 3.5 mm jack or a USB-C socket without DAC. +3. **Bluetooth Headset**: A Bluetooth A2DP headset for wireless audio transmission. +4. **Internal Speaker and MIC**: Devices with built-in speakers and microphones, used as default devices for playback and recording, respectively. 
-## Repositories Involved +### Repositories Involved -[multimedia\_audio\_framework](https://gitee.com/openharmony/multimedia_audio_framework) +[multimedia_audio_framework](https://gitee.com/openharmony/multimedia_audio_framework) \ No newline at end of file