Need a suggestion on how to perform a time-critical operation?


Guest

Hello everyone:


I am normally not an advocate of increasing the priority of a running
process or thread, but it looks like I have no choice. I'm writing a
small app that plays a line-in signal through the speakers using the
low-level digital audio API, and as you can imagine it is quite time
critical. The app works fine when the computer is idle, but as soon as I
minimize a window (any window), let alone do something more involved, the
playback breaks up with crackling.

I came up with the idea of placing the whole "low level Digital Audio" part
in a thread and starting it with THREAD_PRIORITY_TIME_CRITICAL, but it
didn't really help much, because the base priority wasn't set to the
time-critical level due to the priority class of the process still being
low. I really don't want to change the priority class of the app itself, as
it will render the whole system slow.
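
Roughly, what I tried boils down to the following sketch (the actual audio work is elided; the point is that only the worker thread's relative priority is raised, not the process priority class):

// Sketch only: raise the audio worker thread's priority without touching
// the process priority class. The thread body is elided.
#include <windows.h>

DWORD WINAPI AudioThreadProc(LPVOID)
{
    // Raise just this thread; the rest of the process keeps its class.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    // ... low level digital audio loop ...
    return 0;
}

void StartAudioThread()
{
    HANDLE hThread = CreateThread(NULL, 0, AudioThreadProc, NULL, 0, NULL);
    if (hThread)
        CloseHandle(hThread);   // the thread keeps running; we just drop the handle
}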

I thought of putting the "low level Digital Audio" part into a DLL, but it
sounds like a DLL is loaded into the process space it is called from, so I
won't be able to increase the priority class for that code without affecting
my main app either. Am I write?

What else would you gurus suggest? I don't want to start coding and head off
in the wrong direction...

Thank you in advance.
 

Carl Daniel [VC++ MVP]

dc2000 said:

I thought of putting the "low level Digital Audio" part into a DLL, but it
sounds like a DLL is loaded into the process space it is called from, so I
won't be able to increase the priority class for that code without affecting
my main app either. Am I write?

You're not write, but you are right.
What else would you gurus suggest? I don't want to start coding and head off
in the wrong direction...

Learn how to do asynchronous I/O. Unless you're trying to output audio data
one sample at a time, it's not at all time critical for a modern PC -
witness the fact that Windows Media Player (or Winamp, or ...) can play
hours of seamless audio using only a fraction of a percent of CPU time.

Exactly which API(s) are you attempting to use to produce audio output?

-cd
 

Guest

Carl Daniel said:
Exactly which API(s) are you attempting to use to produce audio output?


Here it is, in a nutshell. Tell me if I'm doing something wrong?

Recording:
======
- waveInOpen(nProcessingThreadID, CALLBACK_THREAD);
- waveInPrepareHeader();
- waveInAddBuffer();
- waveInStart()

PlayBack:
======
- waveOutOpen(nProcessingThreadID, CALLBACK_THREAD);
- waveOutPrepareHeader();
- waveOutWrite().

That is pretty much all that happens in the app itself.
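
In code, that sequence boils down to roughly the sketch below (the buffer count, size and g_* names are just illustrative, and error handling is omitted):

// Sketch of the open/prepare/queue sequence described above.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

const int  NUM_QUEUED = 3;          // buffers kept queued in the driver
const UINT BUF_BYTES  = 0x2B10;     // ~1/16 s of 44.1 kHz 16-bit stereo

HWAVEIN  g_hWaveIn  = NULL;
HWAVEOUT g_hWaveOut = NULL;
WAVEHDR  g_inHdr[NUM_QUEUED] = {};
char     g_inBuf[NUM_QUEUED][BUF_BYTES];

void StartCapture(DWORD nProcessingThreadID, const WAVEFORMATEX& wfx)
{
    // Deliver MM_WIM_* / MM_WOM_* messages to the processing thread.
    waveInOpen(&g_hWaveIn, WAVE_MAPPER, &wfx,
               (DWORD_PTR)nProcessingThreadID, 0, CALLBACK_THREAD);
    waveOutOpen(&g_hWaveOut, WAVE_MAPPER, &wfx,
                (DWORD_PTR)nProcessingThreadID, 0, CALLBACK_THREAD);

    for (int i = 0; i < NUM_QUEUED; ++i)
    {
        g_inHdr[i].lpData         = g_inBuf[i];
        g_inHdr[i].dwBufferLength = BUF_BYTES;
        waveInPrepareHeader(g_hWaveIn, &g_inHdr[i], sizeof(WAVEHDR));
        waveInAddBuffer(g_hWaveIn, &g_inHdr[i], sizeof(WAVEHDR));
    }
    waveInStart(g_hWaveIn);

    // Playback headers are prepared and written the same way once the
    // first captured blocks arrive.
}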

Now the "ProcessingThread" in message routine:
(I was trying to up the priority of this thread)
===========
-if MM_WIM_DATA:
- waveInUnprepareHeader();
- remember data recorded in buffer;
- waveInPrepareHeader();
- waveInAddBuffer();

-if MM_WOM_DONE:
- waveOutUnprepareHeader();
- waveOutPrepareHeader();
- waveOutWrite().
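
Spelled out, that thread looks roughly like the sketch below (it uses the g_* globals from the setup sketch above, and the copy/refill steps are elided):

// Sketch of the processing thread: recycle capture buffers and re-queue
// playback buffers as the driver completes them.
DWORD WINAPI ProcessingThread(LPVOID)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        if (msg.message == MM_WIM_DATA)            // a capture buffer is full
        {
            WAVEHDR* pHdr = (WAVEHDR*)msg.lParam;
            waveInUnprepareHeader(g_hWaveIn, pHdr, sizeof(WAVEHDR));

            // ... copy pHdr->lpData / pHdr->dwBytesRecorded to a playback buffer ...

            waveInPrepareHeader(g_hWaveIn, pHdr, sizeof(WAVEHDR));
            waveInAddBuffer(g_hWaveIn, pHdr, sizeof(WAVEHDR));      // recycle it
        }
        else if (msg.message == MM_WOM_DONE)       // a playback buffer finished
        {
            WAVEHDR* pHdr = (WAVEHDR*)msg.lParam;
            waveOutUnprepareHeader(g_hWaveOut, pHdr, sizeof(WAVEHDR));

            // ... fill pHdr->lpData with the next captured block ...

            waveOutPrepareHeader(g_hWaveOut, pHdr, sizeof(WAVEHDR));
            waveOutWrite(g_hWaveOut, pHdr, sizeof(WAVEHDR));        // queue it again
        }
    }
    return 0;
}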

As you can see, it runs quite "transparently" for the system and uses no more
than 5-6% of CPU time, but what I don't get is why the playback halts for a
fraction of a second when I minimize any window. It also happens for a number
of other reasons, such as launching a new program, etc.

The reason I started to mess with priorities is that I looked at all of my
app's threads (with Process Explorer) and found two threads that I did not
start -- both from wdmaud.drv, running with a base priority of 15 -- the
sound driver, I assume.
 

Guest

Carl Daniel said:
Unless you're trying to output audio data one sample at a time, it's not at
all time critical for a modern PC - witness the fact that Windows Media
Player (or Winamp, or ...) can play hours of seamless audio using only a
fraction of a percent of CPU time.

I also want to add that Windows Media Player does not record and play back
real-time audio the way I do. Playing data from a file is easier -- no real
synchronization is needed.
 

Carl Daniel [VC++ MVP]

dc2000 said:
Here it is, in a nutshell. Tell me if I'm doing something wrong?

Well, I'm no multimedia expert, but I think your main problem is that you're
using the ancient (Windows 3.1-era) functions. For a new app, you should be
using DirectSound or DirectPlay, which both support a much richer buffer
management model.
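
Roughly, the DirectSound approach amounts to one looping secondary buffer that you keep topped up ahead of the play cursor. A minimal sketch (error handling omitted; hwnd and wfx are assumed to exist elsewhere):

// Rough DirectSound sketch: a looping secondary buffer the app keeps filled.
#include <windows.h>
#include <dsound.h>
#pragma comment(lib, "dsound.lib")

LPDIRECTSOUND8      g_pDS     = NULL;
LPDIRECTSOUNDBUFFER g_pBuffer = NULL;

bool InitPlayback(HWND hwnd, WAVEFORMATEX& wfx, DWORD bufferBytes)
{
    if (FAILED(DirectSoundCreate8(NULL, &g_pDS, NULL)))
        return false;
    g_pDS->SetCooperativeLevel(hwnd, DSSCL_PRIORITY);

    DSBUFFERDESC desc = {};
    desc.dwSize        = sizeof(desc);
    desc.dwFlags       = DSBCAPS_GLOBALFOCUS | DSBCAPS_GETCURRENTPOSITION2;
    desc.dwBufferBytes = bufferBytes;            // e.g. half a second of audio
    desc.lpwfxFormat   = &wfx;
    if (FAILED(g_pDS->CreateSoundBuffer(&desc, &g_pBuffer, NULL)))
        return false;

    // The app then periodically calls GetCurrentPosition() and Lock()/Unlock()
    // to write fresh samples ahead of the play cursor, and starts it with:
    g_pBuffer->Play(0, 0, DSBPLAY_LOOPING);
    return true;
}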

I'd suggest reposting on microsoft.public.platformsdk.multimedia or
microsoft.public.directx.audio where you'll probably get a more detailed
answer.

-cd
 

William DePalo [MVP VC++]

dc2000 said:
That is pretty much all that happens in the app itself.

Now the "ProcessingThread" in message routine:
(I was trying to up the priority of this thread)
===========
-if MM_WIM_DATA:
- waveInUnprepareHeader();
- remember data recorded in buffer;
- waveInPrepareHeader();
- waveInAddBuffer();

-if MM_WOM_DONE:
- waveOutUnprepareHeader();
- waveOutPrepareHeader();
- waveOutWrite().

I see that Carl has already pointed out that there are other ways to go
besides the old waveform API.

That said, if you continue to use the waveform API, I would switch to the
event method of notification. You'll need to do more bookkeeping to know
what to do when the event is set, signaling the completion of an async
operation, but in my experience that method is more performant than any of
the callback methods.
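
In outline, the event method looks something like the sketch below; the buffer count and globals are placeholders, and the point is simply waiting on one event and scanning your headers for WHDR_DONE (that's the extra bookkeeping):

// Sketch of event-based notification: the driver sets g_hDone whenever any
// queued header completes; the app scans the headers to see which one.
// Assumes the headers were prepared, filled and written once beforehand.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

const int NUM_HDRS = 3;

HANDLE   g_hDone = NULL;
HWAVEOUT g_hOut  = NULL;
WAVEHDR  g_hdr[NUM_HDRS] = {};

void OpenWithEvent(const WAVEFORMATEX& wfx)
{
    g_hDone = CreateEvent(NULL, FALSE, FALSE, NULL);   // auto-reset event
    waveOutOpen(&g_hOut, WAVE_MAPPER, &wfx,
                (DWORD_PTR)g_hDone, 0, CALLBACK_EVENT);
}

void PumpPlayback()
{
    for (;;)
    {
        WaitForSingleObject(g_hDone, INFINITE);
        for (int i = 0; i < NUM_HDRS; ++i)
        {
            if (g_hdr[i].dwFlags & WHDR_DONE)          // this one finished
            {
                // ... refill g_hdr[i].lpData with the next block ...
                waveOutWrite(g_hOut, &g_hdr[i], sizeof(WAVEHDR));   // queue it again
            }
        }
    }
}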

I suggest that you post your original question again in the multimedia
group

microsoft.public.win32.programmer.mmedia

and hope for a reply from Chris P. :)

Regards,
Will
 

Tom Widmer [VC++ MVP]

dc2000 said:

I came up with the idea of placing the whole "low level Digital Audio" part
in a thread and starting it with THREAD_PRIORITY_TIME_CRITICAL, but it
didn't really help much, because the base priority wasn't set to the
time-critical level due to the priority class of the process still being
low. I really don't want to change the priority class of the app itself, as
it will render the whole system slow.

That isn't true if your application is well behaved and only takes the
CPU it needs (assuming that is a relatively small amount of CPU).

What else would you gurus suggest? I don't want to start coding and head off
in the wrong direction...

Are you allowed to have multiple buffers passed using waveInAddBuffer at
once? Obviously, that would get rid of stutter by providing you with a
cushion (and you could presumably have multiple calls to write going on
at once too?).
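
For illustration, a deeper cushion just means keeping more headers in flight at once; a rough sketch (the names, counts and sizes are made up):

// Sketch: prime the output queue with several buffers before relying on the
// completion messages to keep it topped up.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

const int  QUEUE_DEPTH = 8;                 // e.g. 8 x ~60 ms = ~0.5 s of cushion
const UINT BUF_BYTES   = 0x2B10;

HWAVEOUT g_hWaveOut = NULL;                 // opened elsewhere
WAVEHDR  g_outHdr[QUEUE_DEPTH] = {};
char     g_outBuf[QUEUE_DEPTH][BUF_BYTES];

void PrimeOutputQueue()
{
    for (int i = 0; i < QUEUE_DEPTH; ++i)
    {
        g_outHdr[i].lpData         = g_outBuf[i];
        g_outHdr[i].dwBufferLength = BUF_BYTES;
        // ... fill g_outBuf[i] with captured (or initially silent) audio ...
        waveOutPrepareHeader(g_hWaveOut, &g_outHdr[i], sizeof(WAVEHDR));
        waveOutWrite(g_hWaveOut, &g_outHdr[i], sizeof(WAVEHDR));
    }
    // Thereafter each completion refills and re-queues one header, so roughly
    // QUEUE_DEPTH buffers always stand between the app and an audible glitch.
}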

Anyway, the multimedia newsgroups would be better. With DirectSound, you
have a circular buffer as the source and sink for streaming audio, so it is
simple to provide a "cushion".

Tom
 

William DePalo [MVP VC++]

Tom Widmer said:
Are you allowed to have multiple buffers passed using waveInAddBuffer at
once?

For the benefit of the OP, the answer is 'yes, you are'.

Regards,
Will
 

Guest

Tom Widmer said:
That isn't true if your application is well behaved and only takes the
CPU it needs (assuming that is a relatively small amount of CPU).

Are you allowed to have multiple buffers passed using waveInAddBuffer at
once? Obviously, that would get rid of stutter by providing you with a
cushion (and you could presumably have multiple calls to write going on
at once too?).


Tom, the reason I started to mess with the priority of the thread is that
all the sound driver's threads are executing with time-critical priority. I
thought this would help, but it doesn't.

You ask if my app is "well behaved". Yes, it takes no more than 3% of CPU
time during playback, and most of the time it shows 0%. It doesn't "stutter"
much, except when someone minimizes a window (any window). Somehow that
minimize animation XP has freezes all windows for a second or so, and that's
when the stutter occurs. I really don't know how to overcome it -- obviously
WMP doesn't stutter -- but I've almost given up. (I used events for
synchronization, but that doesn't help either.)

My last idea, and I'd appreciate your help here, is to increase the size of
a single buffer for waveIn and waveOut. I'm currently calculating it
according to the sound format:

//Wfx is the WAVEFORMATEX struct for the PCM being recorded & played
UINT ncbSz = ((Wfx.nChannels * Wfx.wBitsPerSample * Wfx.nSamplesPerSec / 8) / 16);
ncbSz &= ~0xf;   // round down to a 16-byte boundary

//2 channels, 16 bit, 96 kHz    => size is 0x5DC0
//2 channels, 16 bit, 44.1 kHz  => size is 0x2B10
//1 channel,  8 bit, 11.025 kHz => size is 0x2B0

I have 32 cyclic buffers of that size, and 3 are constantly queued in the
driver for recording and 3 for playback. (Of course, I perform some
synchronization in the callback routine to keep the number of buffers in the
driver steady.)

Maybe I should increase the size of a single buffer to something larger than
that? That's my only hope...
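
For example, sizing the buffer from a target duration instead of a fixed fraction of a second (say 100 ms, just as an example) would be something like:

// Sketch: compute the per-buffer size from a target duration in milliseconds.
#include <windows.h>
#include <mmsystem.h>

UINT BufferBytesForDuration(const WAVEFORMATEX& wfx, UINT targetMs /* e.g. 100 */)
{
    // For PCM, nAvgBytesPerSec == nChannels * (wBitsPerSample / 8) * nSamplesPerSec.
    UINT cb = wfx.nAvgBytesPerSec * targetMs / 1000;
    cb -= cb % wfx.nBlockAlign;                 // keep whole sample frames
    return cb;
}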
 

Guest

William DePalo said:
That said, if you continue to use the waveform API, I would switch to the
event method of notification. You'll need to do more bookkeeping to know
what to do when the event is set, signaling the completion of an async
operation, but in my experience that method is more performant than any of
the callback methods.


Thanks, I tried using events and it didn't help. I don't see any performance
boost, but maybe it's something I can't catch with the naked eye. Please see
my other response here, after Tom Widmer's post.
 
