Reading and Writing WAV Files in Python – Real Python

There’s an abundance of third-party tools and libraries for manipulating and analyzing audio WAV files in Python. At the same time, the language ships with the little-known wave module in its standard library, offering a quick and straightforward way to read and write such files. Knowing Python’s wave module can help you dip your toes into digital audio processing. If topics like audio analysis, sound editing, or music synthesis get you excited, then you’re in for a treat, as you’re about to get a taste of them!

Although not required, you’ll get the most out of this tutorial if you’re familiar with NumPy and Matplotlib, which greatly simplify working with audio data. Additionally, knowing about numeric arrays in Python will help you better understand the underlying data representation in computer memory.

Understand the WAV File Format
In the early nineties, Microsoft and IBM jointly developed the Waveform Audio File Format, often abbreviated as WAVE or WAV, which stems from the file’s extension (.wav). Despite its age in computer terms, the format remains relevant today. There are several good reasons for its wide adoption, including:

Simplicity: The WAV file format has a straightforward structure, making it relatively uncomplicated to decode in software and understand by humans.
Portability: Many software systems and hardware platforms support the WAV file format as standard, making it suitable for data exchange.
High Fidelity: Because most WAV files contain raw, uncompressed audio data, they’re perfect for applications that require the highest possible sound quality, such as music production or audio editing. On the flip side, WAV files take up significant storage space compared to lossy compression formats like MP3.

It’s worth noting that WAV files are a specialized kind of the Resource Interchange File Format (RIFF), which is a container format for audio and video streams. Other popular file formats based on RIFF include AVI and MIDI. RIFF itself is an extension of the even older IFF format, originally developed by Electronic Arts to store video game resources.
Before diving in, you’ll deconstruct the WAV file format itself to better understand its structure and how it represents sounds. Feel free to jump ahead if you just want to see how to use the wave module in Python.
The Waveform Part of WAV
What you perceive as sound is a disturbance of pressure traveling through a physical medium, such as air or water. At the most fundamental level, every sound is a wave that you can describe using three attributes:

Amplitude is the measure of the sound wave’s strength, which you perceive as loudness.
Frequency is the number of oscillations per second, or the reciprocal of the wave’s period, which corresponds to the pitch.
Phase is the point in the cycle at which the wave starts, which the human ear doesn’t register directly.

The word waveform, which appears in the WAV file format’s name, refers to the graphical depiction of the audio signal’s shape. If you’ve ever opened a sound file using audio editing software, such as Audacity, then you’ve likely seen a visualization of the file’s content that looked something like this:
Waveform in Audacity

That’s your audio waveform, illustrating how the amplitude changes over time.
The vertical axis represents the amplitude at any given point in time. The midpoint of the graph, which is a horizontal line passing through the center, represents the baseline amplitude or the point of silence. Any deviation from this equilibrium corresponds to a higher positive or negative amplitude, which you experience as a louder sound.
As you move from left to right along the graph’s horizontal scale, which is the timeline, you’re essentially moving forward in time through your audio track.
Having such a view can help you visually inspect the characteristics of your audio file. The series of the amplitude’s peaks and valleys reflects the volume changes. Therefore, you can leverage the waveform to identify parts where certain sounds occur or find quiet sections that may need editing.
Coming up next, you’ll learn how WAV files store these amplitude levels in digital form.
The Structure of a WAV File
The WAV audio file format is a binary format that exhibits the following structure on disk:
The Structure of a WAV File

As you can see, a WAV file begins with a header comprising metadata, which describes how to interpret the sequence of audio frames that follow. Each frame consists of channels that correspond to loudspeakers, such as left and right or front and rear. In turn, every channel contains an encoded audio sample, which is a numeric value representing the relative amplitude level at a given point in time.
The most important parameters that you’ll find in a WAV header are:

Encoding: The digital representation of a sampled audio signal. Available encoding types include uncompressed linear Pulse-Code Modulation (PCM) and a few compressed formats like ADPCM, A-Law, or μ-Law.
Channels: The number of channels in each frame, which is usually equal to one for mono and two for stereo audio tracks, but could be more for surround sound recordings.
Frame Rate: The number of frames per second, also known as the sampling rate or the sampling frequency measured in hertz. It affects the range of representable frequencies, which has an impact on the perceived sound quality.
Bit Depth: The number of bits per audio sample, which determines the maximum number of distinctive amplitude levels. The higher the bit depth, the greater the dynamic range of the encoded signal, making subtle nuances in the sound audible.

Python’s wave module supports only the Pulse-Code Modulation (PCM) encoding, which is by far the most popular, meaning that you can usually ignore other formats. Moreover, the module is limited to integer samples, while the PCM standard itself defines several bit depths to choose from, including floating-point ones:

Data Type                 Min Value                 Max Value
8-bit unsigned integer    0                         255
16-bit signed integer     -32,768                   32,767
24-bit signed integer     -8,388,608                8,388,607
32-bit signed integer     -2,147,483,648            2,147,483,647
32-bit floating-point     ≈ -3.40282 × 10^38        ≈ 3.40282 × 10^38
64-bit floating-point     ≈ -1.79769 × 10^308       ≈ 1.79769 × 10^308

In practice, the floating-point data types are overkill for most uses, as they require more storage while providing little return on investment. You probably won’t miss them unless you need extra dynamic range for truly professional sound editing.

Note: Despite dealing with an integer flavor of the PCM encoding, you’ll often want to represent the underlying audio samples as floating-point numbers internally in your source code. This will let you handle various bit depths uniformly and streamline the math behind audio processing.
After reading an audio sample into Python, you’ll typically normalize its value so that it always falls between -1.0 and 1.0 on the scale, regardless of the original range of PCM values. Then, before writing it back to a WAV file, you’ll convert and clamp the value to make it fit the desired range.

The 8-bit integer encoding is the only one relying on unsigned numbers. In contrast, all remaining data types allow both positive and negative sample values.
Another important detail you should take into account when reading audio samples is the byte order. The WAV file format specifies that multi-byte values are stored in little-endian order, that is, with the least significant byte first. Fortunately, you don’t typically need to worry about it when you use the wave module to read or write audio data in Python. Still, there might be edge cases when you do!
The 8-bit, 16-bit, and 32-bit integers have standard representations in the C programming language, which the default CPython interpreter builds on. However, the 24-bit integer is an outlier without a corresponding built-in C data type. Exotic bit depths like that aren’t unheard of in music production, as they help strike a balance between size and quality. Later, you’ll learn how to emulate them in Python.
To faithfully represent music, most WAV files use stereo PCM encoding with 16-bit signed integers sampled at 44.1 kHz or 44,100 frames per second. These parameters correspond to standard CD-quality audio. Such a sampling frequency is roughly double the highest frequency that most humans can hear, which, according to the Nyquist–Shannon sampling theorem, is sufficient to capture sounds in digital form without distortion.
Now that you know what’s inside a WAV file, it’s time to load one into Python!
Get to Know Python’s wave Module
The wave module takes care of reading and writing WAV files but is otherwise pretty minimal. It was implemented in about five hundred lines of pure Python code, not counting the comments. Perhaps most importantly, you can’t use it for audio playback. To play a sound in Python, you’ll need to install a separate library.

Note: Although the wave module itself doesn’t support audio playback, you can still listen to your WAV files as you work through this tutorial. Use the media player that came with your operating system or any third-party application such as VLC.

As mentioned earlier, the wave module only supports four integer-based, uncompressed PCM encoding bit depths:

8-bit unsigned integer
16-bit signed integer
24-bit signed integer
32-bit signed integer

To experiment with Python’s wave module, you can download the supporting materials, which include a few sample WAV files encoded in these formats.
Read WAV Metadata and Audio Frames
If you haven’t already grabbed the bonus materials, then you can download a sample recording of a Bongo drum directly from Wikimedia Commons to get started. It’s a mono audio sampled at 44.1 kHz and encoded in the 16-bit PCM format. The file is in the public domain, so you can freely use it for personal or commercial purposes without any restrictions.
To load this file into Python, import the wave module and call its open() function with a string indicating the path to your WAV file as the function’s argument:

When you don’t pass any additional arguments, the function, which is the only one in the module’s public interface, opens the given file for reading and returns a Wave_read instance. You can use that object to retrieve information stored in the WAV file’s header and read the encoded audio frames:
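Here’s a self-contained sketch of those calls. Because the Bongo recording isn’t bundled with this text, the snippet first writes a second of placeholder silence so that the read calls have something to work with; with the actual recording, you’d pass its path straight to wave.open():

```python
import wave

# Scaffolding so this sketch runs on its own: write one second of
# 16-bit mono silence at 44.1 kHz. With a real recording, such as
# the Bongo drum sample, you'd skip this step.
with wave.open("sample.wav", mode="wb") as writer:
    writer.setnchannels(1)
    writer.setsampwidth(2)
    writer.setframerate(44100)
    writer.writeframes(bytes(2 * 44100))

# Opening a file without extra arguments returns a Wave_read object:
with wave.open("sample.wav") as wav_file:
    metadata = wav_file.getparams()
    frames = wav_file.readframes(wav_file.getnframes())

print(metadata)
print(len(frames))
```

The getparams() call returns all header fields at once, while readframes() pulls the requested number of frames as raw bytes.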

You can conveniently get all parameters of your WAV file into a named tuple with descriptive attributes like .nchannels or .framerate. Alternatively, you can call the individual methods, such as .getnchannels(), on the Wave_read object to cherry-pick the specific pieces of metadata that you’re interested in.
The underlying audio frames get exposed to you as an unprocessed bytes instance, which is a really long sequence of unsigned byte values. Unfortunately, you can’t do much beyond what you’ve seen here because the wave module merely returns the raw bytes without providing any help in their interpretation.
Your sample recording of the Bongo drum, which is less than five seconds long and only uses one channel, comprises nearly half a million bytes! To make sense of them, you must know the encoding format and manually decode those bytes into integer numbers.
According to the metadata you’ve just obtained, there’s only one channel in each frame, and each audio sample occupies two bytes or sixteen bits. Therefore, you can conclude that your file’s been encoded using the PCM format with 16-bit signed integers. Such a representation would correspond to the signed short data type in C, which doesn’t exist in Python.
While Python doesn’t directly support the required data type, you can use the array module to declare an array of signed short numbers and pass your bytes object as input. In this case, you want to use the lowercase letter “h” as the array’s type code to tell Python how to interpret the frames’ bytes:
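For example, using a short stand-in for the frames you read earlier, with values chosen to show the extremes of the 16-bit range:

```python
import array

# Stand-in for the raw frames read from a WAV file: four 16-bit
# little-endian samples (0, 32767, -32768, 1).
frames = b"\x00\x00\xff\x7f\x00\x80\x01\x00"

# The "h" type code interprets the bytes as signed short integers:
samples = array.array("h", frames)
```

Each element of the resulting array is a plain Python int decoded from two consecutive bytes.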

Notice that the resulting array has exactly half the number of elements as your original sequence of bytes. That’s because each number in the array represents a 16-bit, or two-byte, audio sample.
Another option at your fingertips is the struct module, which lets you unpack a sequence of bytes into a tuple of numbers according to the specified format string:
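Using the same stand-in bytes as before, a sketch of the struct approach might look like this:

```python
import struct

# Stand-in for the raw audio frames: four little-endian samples.
frames = b"\x00\x00\xff\x7f\x00\x80\x01\x00"

# "<" forces little-endian; each "h" consumes one two-byte sample:
samples = struct.unpack(f"<{len(frames) // 2}h", frames)
print(samples)  # (0, 32767, -32768, 1)
```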

The less-than symbol (<) in the format string explicitly indicates little-endian as the byte order of each two-byte audio sample (h). In contrast, an array implicitly assumes your platform’s byte order, so you may need to call its .byteswap() method on big-endian machines.
Finally, you can use NumPy as an efficient and powerful alternative to Python’s standard library modules, especially if you already work with numerical data:
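Again with stand-in bytes, a minimal NumPy sketch might read:

```python
import numpy as np

# Stand-in for the raw audio frames: four little-endian samples.
frames = b"\x00\x00\xff\x7f\x00\x80\x01\x00"

# "<i2" means a little-endian two-byte signed integer:
samples = np.frombuffer(frames, dtype="<i2")

# Normalize to the right-open interval [-1.0, 1.0):
normalized = samples / 2**15
```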

By using a NumPy array to store your PCM samples, you can leverage its element-wise vectorized operations to normalize the amplitude of the encoded audio signal. The highest magnitude of an amplitude stored in a signed short integer is -32,768, or -2^15. Dividing the samples by 2^15 scales them to the right-open interval from -1.0 to 1.0, which is convenient for audio processing tasks.
That brings you to a point where you can finally start performing interesting tasks on your audio data in Python, such as plotting the waveform or applying special effects. Before you do, however, you should learn how to save your work to a WAV file.
Write Your First WAV File in Python
Knowing how to use the wave module in Python opens up exciting possibilities, such as sound synthesis. Wouldn’t it be great to compose your own music or sound effects from scratch and listen to them? Now, you can do just that!
Mathematically, you can represent any complex sound as the sum of sufficiently many sinusoidal waves of different frequencies, amplitudes, and phases. By mixing them in the right proportions, you can recreate the unique timbre of different musical instruments playing the same note. Later, you may combine a few musical notes into chords and use them to form interesting melodies.
Here’s the general formula for calculating the amplitude A(t) at time instant t of a sine wave with frequency f and phase shift φ, whose maximum amplitude is A:
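In plain notation, the formula reads:

A(t) = A sin(2πft + φ)

Here, 2πf converts the frequency from hertz to radians per second before it gets multiplied by the time instant t.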

When you scale your amplitudes to a range of values between -1.0 and 1.0, then you can disregard the A factor in the equation. You can also omit the φ term because the phase shift is usually irrelevant to your applications.
Go ahead and create a Python script with the following helper function, which implements the formula above:
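A minimal version of that helper, consistent with the description that follows, might look like this (the exact rounding details may differ from the original script):

```python
from math import pi, sin

FRAMES_PER_SECOND = 44100

def sound_wave(frequency, num_seconds):
    for frame in range(round(num_seconds * FRAMES_PER_SECOND)):
        time = frame / FRAMES_PER_SECOND
        amplitude = sin(2 * pi * frequency * time)
        # Shift and scale [-1.0, 1.0] onto the 8-bit range 0-255:
        yield round((amplitude + 1) / 2 * 255)
```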

First, you import Python’s math module to access the sin() function and then specify a constant with the frame rate for your audio, defaulting to 44.1 kHz. The helper function, sound_wave(), takes the frequency in hertz and duration in seconds of the expected wave as parameters. Based on them, it calculates the 8-bit unsigned integer PCM samples of the corresponding sine wave and yields them to the caller.
To determine the time instant before plugging it into the formula, you divide the current frame number by the frame rate, which gives you the current time in seconds. Then, you calculate the amplitude at that point using the simplified math formula you saw earlier. Finally, you shift, scale, and clip the amplitude to fit the range of 0 to 255, corresponding to an 8-bit unsigned integer PCM audio sample.
You can now synthesize, say, a pure sound of the musical note A by generating a sine wave with a frequency of 440 Hz lasting for two and a half seconds. Then, you can use the wave module to save the resulting PCM audio samples to a WAV file:
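Here’s one way to wire that up, with output.wav as a stand-in filename and the helper repeated so the sketch runs on its own:

```python
import wave
from math import pi, sin

FRAMES_PER_SECOND = 44100

def sound_wave(frequency, num_seconds):
    for frame in range(round(num_seconds * FRAMES_PER_SECOND)):
        time = frame / FRAMES_PER_SECOND
        amplitude = sin(2 * pi * frequency * time)
        yield round((amplitude + 1) / 2 * 255)

with wave.open("output.wav", mode="wb") as wav_file:
    wav_file.setnchannels(1)     # mono
    wav_file.setsampwidth(1)     # one byte = 8-bit unsigned PCM
    wav_file.setframerate(FRAMES_PER_SECOND)
    wav_file.writeframes(bytes(sound_wave(440, 2.5)))
```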

You start by adding the necessary import statement and then calling wave.open() with the mode parameter equal to the string literal “wb”, which stands for writing in binary mode. In that case, the function returns a Wave_write object. Note that Python always opens your WAV files in binary mode even if you don’t explicitly use the letter b in the mode parameter’s value.
Next, you set the number of channels to one, which represents mono audio, and the sample width to one byte or eight bits, which corresponds to PCM encoding with 8-bit unsigned integer samples. You also pass your frame rate stored in the constant. Lastly, you convert the computed PCM audio samples into a sequence of bytes before writing them to the file.
Now that you understand what the code does, you can go ahead and run the script:

Congratulations! You’ve successfully synthesized your first sound in Python. Try playing it back in a media player to hear the result. Next, you’ll spice it up a little.
Mix and Save Stereo Audio
Producing mono audio is a great starting point. However, saving a stereo signal in the WAV file format is trickier because you have to interleave the audio samples from the left and right channels in each frame. To do so, you can modify your existing code as shown below or place it in a brand-new Python file:
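A sketch of the stereo version might look as follows; the filename is a stand-in, and the 480 Hz right-channel frequency follows the ringing-tone example discussed next:

```python
import itertools
import wave
from math import pi, sin

FRAMES_PER_SECOND = 44100

def sound_wave(frequency, num_seconds):
    for frame in range(round(num_seconds * FRAMES_PER_SECOND)):
        time = frame / FRAMES_PER_SECOND
        amplitude = sin(2 * pi * frequency * time)
        yield round((amplitude + 1) / 2 * 255)

left_channel = sound_wave(440, 2.5)
right_channel = sound_wave(480, 2.5)
# Interleave the channels: L, R, L, R, ...
stereo_frames = itertools.chain(*zip(left_channel, right_channel))

with wave.open("ringtone.wav", mode="wb") as wav_file:
    wav_file.setnchannels(2)
    wav_file.setsampwidth(1)
    wav_file.setframerate(FRAMES_PER_SECOND)
    wav_file.writeframes(bytes(stereo_frames))
```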

First, you import the itertools module so that you can chain() the zipped and unpacked pairs of audio samples from both channels. Note that this is just one of many ways to implement the channel interleaving. While the left channel stays as before, the right one becomes another sound wave with a slightly different frequency. Both have equal lengths of two and a half seconds.
You also update the number of channels in the file’s metadata accordingly and convert the stereo frames to raw bytes. When played back, your generated audio file should resemble the sound of a ringing tone used by most phones in North America. You can simulate other tones from various countries by adjusting the two sound frequencies, 440 Hz and 480 Hz. To use additional frequencies, you may need more than two channels.

Note: Reducing your frame rate to something like 8 kHz will make the ringing tone sound even more plausible by mimicking the limited frequency range of a typical telephone line.

Alternatively, instead of allocating separate channels for your sound waves, you can mix them together to create interesting effects. For example, two sound waves with very close frequencies produce a beating interference pattern. You might have experienced this phenomenon firsthand when traveling on an airplane because the jet engines on both sides never rotate at exactly the same speeds. This creates a distinctive pulsating sound in the cabin.
Mixing two sound waves boils down to adding their amplitudes. Just make sure to clamp their sum afterward so that it doesn’t exceed the available amplitude range to avoid distortion:
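A mono sketch of that idea might look like this; the 440 Hz and 441 Hz pair is an assumption chosen for a slow, audible beat:

```python
import wave
from math import pi, sin

FRAMES_PER_SECOND = 44100

def sound_wave(frequency, num_seconds):
    for frame in range(round(num_seconds * FRAMES_PER_SECOND)):
        time = frame / FRAMES_PER_SECOND
        yield sin(2 * pi * frequency * time)  # raw amplitude in [-1.0, 1.0]

def mixed_wave(num_seconds):
    wave_1 = sound_wave(440, num_seconds)
    wave_2 = sound_wave(441, num_seconds)
    for amplitude_1, amplitude_2 in zip(wave_1, wave_2):
        # Clamp the sum to [-1.0, 1.0] to avoid distortion:
        amplitude = max(-1.0, min(amplitude_1 + amplitude_2, 1.0))
        yield round((amplitude + 1) / 2 * 255)

with wave.open("beat.wav", mode="wb") as wav_file:
    wav_file.setnchannels(1)
    wav_file.setsampwidth(1)
    wav_file.setframerate(FRAMES_PER_SECOND)
    wav_file.writeframes(bytes(mixed_wave(2.5)))
```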

Here, you’re back to mono sound for a moment. After generating the two sound waves, you add their corresponding amplitudes. They’ll sometimes cancel each other out and, at other times, amplify the overall sound. The built-in min() and max() functions help you keep the resulting amplitude between -1.0 and 1.0 at all times.
Play around with the script above by increasing or decreasing the distance between both frequencies to observe how it affects the resulting beating rhythm.
Encode With Higher Bit Depths
So far, you’ve been representing every audio sample with a single byte or eight bits to keep things simple. That gave you 256 distinct amplitude levels, which was decent enough for your needs. However, you’ll want to bump up your bit depth sooner or later to achieve a much greater dynamic range and better sound quality. This comes at a cost, though, and it’s not just additional memory.
To use one of the multi-byte PCM encodings, such as the 16-bit one, you’ll need to consider the conversion between a Python int and a suitable byte representation. It involves handling the byte order and sign bit, in particular. Python gives you a few tools to help with that, which you’ll explore now:

bytearray and int.to_bytes()

When switching to a higher bit depth, you must adjust the scaling and byte conversion accordingly. For example, to use 16-bit signed integers, you can use the array module and make the following tweaks in your script:
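One possible shape of the 16-bit version, with the filename as a stand-in:

```python
import wave
from array import array
from math import pi, sin

FRAMES_PER_SECOND = 44100

def sound_wave(frequency, num_seconds):
    for frame in range(round(num_seconds * FRAMES_PER_SECOND)):
        time = frame / FRAMES_PER_SECOND
        amplitude = sin(2 * pi * frequency * time)
        # Scale and clamp to the asymmetric 16-bit signed range:
        yield max(-32768, min(round(amplitude * 32768), 32767))

left_channel = sound_wave(440, 2.5)
right_channel = sound_wave(480, 2.5)

samples = array("h")
for left, right in zip(left_channel, right_channel):
    samples.append(left)
    samples.append(right)

with wave.open("output.wav", mode="wb") as wav_file:
    wav_file.setnchannels(2)
    wav_file.setsampwidth(2)  # two bytes = 16-bit PCM
    wav_file.setframerate(FRAMES_PER_SECOND)
    wav_file.writeframes(samples.tobytes())
```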

Here’s a quick breakdown of the most important pieces:

The calculated amplitude gets scaled and clamped to the 16-bit signed integer range, which extends from -32,768 to 32,767. Note the asymmetry of the extreme values.
An array of signed short integers gets filled with the interleaved samples from both channels in a loop.
The sample width is set to two bytes, which corresponds to the 16-bit PCM encoding.
Finally, the array is converted to a sequence of bytes and written to the file.

Recall that Python’s array relies on your computer’s native byte order. To have more granular control over such technical details, you can use an alternative solution based on a bytearray:
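A sketch of the bytearray-based variant, with the int16 helper name and filename as assumptions:

```python
import wave
from functools import partial
from math import pi, sin

FRAMES_PER_SECOND = 44100

def sound_wave(frequency, num_seconds):
    for frame in range(round(num_seconds * FRAMES_PER_SECOND)):
        time = frame / FRAMES_PER_SECOND
        amplitude = sin(2 * pi * frequency * time)
        yield max(-32768, min(round(amplitude * 32768), 32767))

# Fix the length, byte order, and signedness up front:
int16 = partial(int.to_bytes, length=2, byteorder="little", signed=True)

frames = bytearray()
for left, right in zip(sound_wave(440, 2.5), sound_wave(480, 2.5)):
    frames.extend(int16(left))
    frames.extend(int16(right))

with wave.open("output.wav", mode="wb") as wav_file:
    wav_file.setnchannels(2)
    wav_file.setsampwidth(2)
    wav_file.setframerate(FRAMES_PER_SECOND)
    wav_file.writeframes(frames)
```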

Again, here are the most essential bits:

A partial function based on the int.to_bytes() method fixes some of its arguments up front. It takes a Python int and converts it to a suitable byte representation required by the WAV file format. In this case, it explicitly allocates two bytes with the little-endian byte order and a sign bit for each input value.
An empty bytearray serves as a mutable equivalent of the bytes object. It’s like a list but for Python’s unsigned bytes. The loop keeps extending the bytearray with the encoded bytes while iterating over your PCM audio samples.
Finally, you pass the bytearray with your frames directly to the WAV file for writing.

Finally, you can use NumPy to elegantly express the sound wave equation and handle byte conversion efficiently:
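Here’s one way that version might look, with the filename as a stand-in:

```python
import wave

import numpy as np

FRAMES_PER_SECOND = 44100

def sound_wave(frequency, num_seconds):
    time = np.arange(0, num_seconds, 1 / FRAMES_PER_SECOND)
    amplitude = np.sin(2 * np.pi * frequency * time)
    # Scale and clip before converting, to avoid silent overflow:
    samples = np.clip(amplitude * 2**15, -(2**15), 2**15 - 1)
    return samples.astype("<h")

left_channel = sound_wave(440, 2.5)
right_channel = sound_wave(480, 2.5)

# Stack the channels as columns, then flatten row by row to interleave:
stereo_frames = np.column_stack([left_channel, right_channel]).ravel()

with wave.open("output.wav", mode="wb") as wav_file:
    wav_file.setnchannels(2)
    wav_file.setsampwidth(2)
    wav_file.setframerate(FRAMES_PER_SECOND)
    wav_file.writeframes(stereo_frames.tobytes())
```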

Thanks to NumPy’s vectorized operations, you eliminate the need for explicit looping in your sound_wave() function. You generate an array of time instants measured in seconds and pass it as input to the np.sin() function, which computes the corresponding amplitude values. Later, you stack and interleave the two channels, creating the stereo signal ready to be written into your WAV file.
The conversion of integers to 16-bit PCM samples gets accomplished by calling NumPy’s .astype() method with the string literal “<h” as an argument, which is the same as np.int16 on little-endian platforms. However, before doing that, you must scale and clip your amplitude values to prevent NumPy from silently overflowing or underflowing, which could cause audible clicks in your WAV file.
Doesn’t this code look more compact and readable than the other two versions? From now on, you’ll use NumPy in the remaining part of this tutorial.
That said, encoding the PCM samples by hand gets pretty cumbersome because of the necessary amplitude scaling and byte conversion involved. Python’s lack of signed bytes sometimes makes it even more challenging. Anyway, you still haven’t addressed the 24-bit PCM encoding, which requires special handling. In the next section, you’ll streamline this process by wrapping it in a convenient abstraction.
Decipher the PCM-Encoded Audio Samples
This part is going to get slightly more advanced, but it’ll make working with WAV files in Python much more approachable in the long run. Afterward, you’ll be able to build all kinds of cool audio-related applications, which you’ll explore in detail over the next few sections. Along the way, you’ll leverage modern Python features, such as enumerations, data classes, and pattern matching.
By the end of this tutorial, you should have a custom Python package called waveio consisting of the following modules:

waveio/
├── __init__.py
├── encoding.py
├── metadata.py
├── reader.py
└── writer.py


The encoding module will be responsible for the two-way conversion between normalized amplitude values and PCM-encoded samples. The metadata module will represent the WAV file header, reader will facilitate reading and interpreting audio frames, and writer will allow for the creation of WAV files.
If you don’t want to implement all of this from scratch or if you get stuck at any point, then feel free to download and use the supporting materials. Additionally, these materials include several WAV files encoded differently, which you can use for testing:

With that out of the way, it’s time to start coding!
Enumerate the Encoding Formats
Use your favorite IDE or code editor to create a new Python package and name it waveio. Then, define the encoding module inside your package and populate it with the following code:
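The enumeration might start out like this; the SIGNED_16 member and the 1-through-4 values come from the discussion below, while the remaining member names are analogous assumptions:

```python
from enum import IntEnum

class PCMEncoding(IntEnum):
    UNSIGNED_8 = 1
    SIGNED_16 = 2
    SIGNED_24 = 3
    SIGNED_32 = 4
```

Because the members are ints, expressions like 8 * PCMEncoding.SIGNED_16 evaluate to 16, and PCMEncoding(2) maps a sample width straight to a member.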

The PCMEncoding class extends IntEnum, which is a special kind of enumeration that combines the Enum base class with Python’s built-in int data type. As a result, each member of this enumeration becomes synonymous with an integer value, which you can directly use in logical and arithmetic expressions:

This will become useful for finding the number of bits and the range of values supported by an encoding.
The values 1 through 4 in your enumeration represent the number of bytes occupied by a single audio sample in each encoding format. For instance, the SIGNED_16 member has a value of two because the corresponding 16-bit PCM encoding uses exactly two bytes per audio sample. Thanks to that, you can leverage the sampwidth field of a WAV file’s header to quickly instantiate a suitable encoding:

Passing an integer value representing the desired sample width through the enumeration’s constructor returns the right encoding instance.
Once you determine the encoding of a particular WAV file, you’ll want to use your PCMEncoding instance to decode the binary audio frames. Before you do, however, you’ll need to know the minimum and maximum values of audio samples encoded with the given format so that you can correctly scale them to floating-point amplitudes for further processing. To do so, define these properties in your enumeration type:
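One way to write these properties, matching the description below; the num_bits property name is an assumption:

```python
from enum import IntEnum

class PCMEncoding(IntEnum):
    UNSIGNED_8 = 1
    SIGNED_16 = 2
    SIGNED_24 = 3
    SIGNED_32 = 4

    @property
    def num_bits(self):
        # Each member's value is the number of bytes per sample:
        return 8 * self

    @property
    def min(self):
        # Unsigned 8-bit samples start at zero; signed ones at -2^(bits-1):
        return 0 if self == 1 else -(2 ** (self.num_bits - 1))

    @property
    def max(self):
        # Two's complement asymmetry: max = -min - 1 for signed types.
        return 255 if self == 1 else -self.min - 1
```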

To find the maximum value representable on the current encoding, you first check if self, which signifies the number of bytes per sample, compares equal to one. When it does, it means that you’re dealing with an 8-bit unsigned integer whose maximum value is 255. Otherwise, the maximum value of a signed integer is the negative of its minimum value minus one.

Note: This stems from the asymmetry of the binary two’s complement representation of integers in computers. For example, if the minimum value of a signed short is equal to -32,768, then its maximum value must be 32,767.

The minimum value of an unsigned integer is always zero. On the other hand, finding the minimum value of a signed integer requires the knowledge of the number of bits per sample. You calculate it in yet another property by multiplying the number of bytes, which is represented by self, by the eight bits in each byte. Then, you take the negative of two raised to the power of the number of bits minus one.
Note that to fit the code of each of the .max and .min properties on a single line, you use Python’s conditional expression, which is the equivalent of the ternary conditional operator in other programming languages.
Now, you have all the building blocks required to decode audio frames into numeric amplitudes, turning raw bytes loaded from a WAV file into meaningful values in your Python code.
Convert Audio Frames to Amplitudes
You’ll use NumPy to streamline your code and make the decoding of PCM samples more performant than with pure Python. Go ahead and add a new method to your PCMEncoding class, which will handle all four encoding formats that Python understands:

The new .decode() method takes a bytes-like object representing the raw audio frames loaded from a WAV file as an argument. Regardless of the number of audio channels in each frame, you can treat the frames variable as a long sequence of bytes to interpret.
To handle the different encodings, you branch out your code using structural pattern matching with the help of the match and case soft keywords introduced in Python 3.10. When none of your enumeration members match the current encoding, you raise a TypeError exception with a suitable message.
Also, you use the Ellipsis (…) symbol as a placeholder in each branch to prevent Python from raising a syntax error due to an empty code block. Alternatively, you could’ve used the pass statement to achieve a similar effect. Soon, you’ll fill those placeholders with code tailored to handling the relevant encodings.
Start by covering the 8-bit unsigned integer PCM encoding in your first branch:

In this code branch, you turn the binary frames into a one-dimensional NumPy array of signed floating-point amplitudes ranging from -1.0 to 1.0. By calling np.frombuffer() with its second positional parameter (dtype) set to the string “u1”, you tell NumPy to interpret the underlying values as one-byte long unsigned integers. Then, you normalize and shift the decoded PCM samples, which causes the result to become an array of floating-point values.

Note: NumPy offers several ways to declare the same data type. Some of these alternatives are more flexible than others but use different notations, which can become confusing. For example, to specify a value of the signed short data type in C, you’d choose one of the following:

Built-in Alias: np.short or np.int16
Character Code: “h”
Type String: “i2”

Note that np.int16 counts the 16 bits of a signed integer. On the other hand, the “i2” string refers to a signed integer occupying two bytes rather than bits. Moreover, you can only specify the intended byte order using the last two methods.
When in doubt, use the last notation to be explicit. For example, the string “<i2” means a two-byte signed integer in little-endian byte order.

You can now give your code a spin to verify it’s working as intended. Make sure that you’ve downloaded the sample WAV files before proceeding, and adjust the path below as necessary:

You open the 8-bit mono file sampled at 44.1 kHz. After reading its metadata from the file’s header and loading all audio frames into memory, you instantiate your PCMEncoding class and call its .decode() method on the frames. As a result, you get a NumPy array of scaled amplitude values.
Decoding the 16-bit and 32-bit signed integer samples works similarly, so you can implement both in one step:

This time, you normalize the samples by dividing them by their corresponding maximum magnitude without any offset correction, as signed integers are already centered around zero. However, there’s a slight asymmetry because the minimum has a greater absolute value than the maximum, which is why you take the negative minimum instead of the maximum as a scaling factor. This results in a right-open interval from -1.0 inclusive to 1.0 exclusive.
Finally, it’s time to address the elephant in the room: decoding audio samples represented as 24-bit signed integers.
Interpret the 24-Bit Depth of Audio
One way to tackle the decoding of 24-bit signed integers is by repeatedly calling Python’s int.from_bytes() on the consecutive triplets of bytes:

First, you create a generator expression that iterates over the byte stream in steps of three, corresponding to the three bytes of each audio sample. You convert such a byte triplet to a Python int, specifying the byte order and the sign bit’s interpretation. Next, you pass your generator expression to NumPy’s np.fromiter() function and use the same scaling technique as before.
It gets the job done and looks fairly readable but may be too slow for most practical purposes. Here’s one of many equivalent implementations based on NumPy, which is significantly more efficient:

The trick here is to reshape your flat array of bytes into a two-dimensional matrix comprising three columns, each representing the consecutive bytes of a sample. When you specify -1 as one of the dimensions, NumPy automatically calculates the size of that dimension based on the length of the array and the size of the other dimension.
Next, you pad your matrix on the right by appending the fourth column filled with zeros. After that, you reshape the matrix again by flattening it into another sequence of bytes, with an extra zero for every fourth element.
Finally, you reinterpret the bytes as 32-bit signed integers ("<i4"), which is the smallest integer type available in NumPy that can accommodate your 24-bit audio samples. However, before normalizing the reconstructed amplitudes, you must take care of the sign bit, which is currently in the wrong place due to padding the number with an extra byte. The sign bit should always be the most significant bit.
Rather than performing complicated bitwise operations, you can take advantage of the fact that a misplaced sign bit will cause the value to overflow when switched on. You detect that by checking if the value is greater than self.max, and if so, you add twice the minimum value to correct the sign. This step effectively moves the sign bit to its correct position. Afterward, you normalize the samples as before.
To verify if your decoding code works as expected, use the sample WAV files included in the bonus materials. You can compare the resulting amplitude values of a sound encoded with different bit depths:

Right off the bat, you can tell that all four files represent the same sound despite using different bit depths. Their amplitudes are strikingly similar, with only slight variations due to varying precision. The 8-bit PCM encoding is noticeably more imprecise than the rest but still captures the overall shape of the same sound wave.
To keep the promise that your encoding module and its PCMEncoding class carry in their names, you should implement the encoding part of this two-way conversion.
Encode Amplitudes as Audio Frames
Add the .encode() method to your class now. It’ll take the normalized amplitudes as an argument and return a bytes instance with the encoded audio frames ready for writing to a WAV file. Here’s the method’s scaffolding, which mirrors the .decode() counterpart you implemented earlier:

Each code branch will reverse the steps you followed before when decoding audio frames expressed in the corresponding format.
However, to encode your processed amplitudes into a binary format, you’ll not only need to implement scaling based on the minimum and maximum values, but also clamping to keep them within the allowed range of PCM values. When using NumPy, you can call np.clip() to do the work for you:

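A sketch of that helper, on a pared-down `PCMEncoding` that defines only the pieces clamping needs (the real class derives `.min` and `.max` from properties):

```python
import numpy as np

class PCMEncoding:
    def __init__(self, num_bits: int) -> None:
        self.max = 2 ** (num_bits - 1) - 1
        self.min = -(2 ** (num_bits - 1))

    def _clamp(self, samples: np.ndarray) -> np.ndarray:
        # Limit every sample to the allowed range of PCM values
        return np.clip(samples, self.min, self.max)
```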
You wrap the call to np.clip() in a non-public method named ._clamp(), which expects the PCM audio samples as its only argument. The minimum and maximum values of the corresponding encoding are determined by your earlier properties, which the method delegates to.
Below are the steps for encoding the three standard formats, 8-bit, 16-bit, and 32-bit, which share common logic:

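The shared logic might be sketched like this, condensed into one standalone function that dispatches on the sample width (the article implements it as branches of the `.encode()` method instead):

```python
import numpy as np

def encode(amplitudes: np.ndarray, num_bytes: int) -> bytes:
    """Encode normalized amplitudes as little-endian PCM bytes."""
    magnitude = 2 ** (8 * num_bytes - 1)
    samples = amplitudes * magnitude  # scale to the full PCM range
    if num_bytes == 1:
        # 8-bit WAV samples are unsigned, so shift out the negative part
        samples += magnitude
        return np.round(np.clip(samples, 0, 255)).astype("u1").tobytes()
    clamped = np.clip(samples, -magnitude, magnitude - 1)
    return np.round(clamped).astype(f"<i{num_bytes}").tobytes()
```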
In each case, you scale your amplitudes so that they use the entire range of PCM values. Additionally, for the 8-bit encoding, you shift that range to get rid of the negative part. Then, you clamp and convert the scaled samples to the appropriate byte representation. This is necessary because scaling may result in values outside the allowed range.
Producing PCM values in the 24-bit encoding format is slightly more involved. As before, you can use either pure Python or NumPy’s array reshaping to help align the data correctly. Here’s the first version:

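A standalone sketch of the pure-Python variant, with clamping inlined for self-containment:

```python
import numpy as np

def encode_24bit_pure(amplitudes: np.ndarray) -> bytes:
    samples = np.clip(amplitudes * 2**23, -(2**23), 2**23 - 1)
    # Round each sample and serialize it as three little-endian signed bytes
    return b"".join(
        int(round(sample)).to_bytes(3, byteorder="little", signed=True)
        for sample in samples
    )
```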
You use another generator expression, which iterates over the scaled and clamped samples. Each sample gets rounded to an integer value and converted to a bytes instance with a call to int.to_bytes(). Then, you pass your generator expression into the .join() method of an empty bytes literal to concatenate the individual bytes into a longer sequence.
And, here’s the optimized version of the same task based on NumPy:

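A possible sketch of that optimization, mirroring the decoding trick in reverse:

```python
import numpy as np

def encode_24bit_numpy(amplitudes: np.ndarray) -> bytes:
    samples = np.clip(amplitudes * 2**23, -(2**23), 2**23 - 1)
    # Encode as 32-bit integers, then view the result as raw unsigned bytes
    as_bytes = np.round(samples).astype("<i4").view("u1")
    # Drop every fourth (most significant) byte to get 24-bit samples
    return as_bytes.reshape(-1, 4)[:, :3].flatten().tobytes()
```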
After scaling and clamping the input amplitudes, you make a memory view over your array of integers to interpret them as a sequence of unsigned bytes. Next, you reshape it into a matrix consisting of four columns and disregard the last column with the slicing syntax before flattening the array.
Now that you can decode and encode audio samples using various bit depths, it’s time to put your new skill into practice.
Visualize Audio Samples as a Waveform
In this section, you’ll have an opportunity to implement the metadata and reader modules in your custom waveio package. When you combine them with the encoding module that you built previously, you’ll be able to plot a graph of the audio content persisted in a WAV file.
Encapsulate WAV File’s Metadata
Managing multiple parameters comprising the WAV file’s metadata can be cumbersome. To make your life a tad bit easier, you can group them under a common namespace by defining a custom data class like the one below:

This data class is marked as frozen, which means that you won’t be able to change the values of the individual attributes once you create a new instance of WAVMetadata. In other words, objects of this class are immutable. This is a good thing because it prevents you from accidentally modifying the object’s state, ensuring consistency throughout your program.
The WAVMetadata class lists four attributes, including your very own PCMEncoding based on the sampwidth field from the WAV file’s header. The frame rate is expressed as a Python float because the wave module accepts fractional values, which it then rounds before writing to the file. The remaining two attributes are integers representing the numbers of channels and frames, respectively.
Note that using a default value of None and the union type declaration makes the number of frames optional. This is so that you can use your WAVMetadata objects when writing an indeterminate stream of audio frames without knowing their total number in advance. It’ll become helpful in downloading an online radio stream and saving it to a WAV file later in this tutorial.
Computers can easily process discrete audio frames, but humans naturally understand sound as a continuous flow over time. So, it’s more convenient to think of audio duration in terms of the number of seconds instead of frames. You can calculate the duration in seconds by dividing the total number of frames by the number of frames per second:

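Such a property might be sketched as follows on a pared-down `WAVMetadata`, repeating only the attributes the calculation needs (`.num_seconds` is an assumed name):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class WAVMetadata:
    frames_per_second: float
    num_channels: int
    num_frames: Optional[int] = None

    @property
    def num_seconds(self) -> float:
        # Duration is the total frame count divided by frames per second
        if self.num_frames is None:
            raise ValueError("indeterminate stream of audio frames")
        return self.num_frames / self.frames_per_second
```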
By defining a property, you’ll be able to access the number of seconds as if it were just another attribute in the WAVMetadata class.
Knowing how the audio frames translate to seconds lets you visualize the sound in the time domain. However, before plotting a waveform, you must load it from a WAV file first. Instead of using the wave module directly like before, you’re going to rely on another abstraction that you’ll build next.
Load All Audio Frames Eagerly
The relative simplicity of the wave module in Python makes it an accessible gateway to sound analysis, synthesis, and editing. Unfortunately, by exposing the low-level intricacies of the WAV file format, it makes you solely responsible for manipulating raw binary data. This can quickly become overwhelming even before you get to solving more complex audio processing tasks.
To help with that, you can build a custom adapter that will wrap the wave module, hiding the technical details of working with WAV files. It’ll let you access and interpret the underlying sound amplitudes in a more user-friendly way.
Go ahead and create another module named reader in your waveio package, and define the following WAVReader class in it:

The class initializer method takes a path argument, which can be a plain string or a pathlib.Path instance. It then opens the corresponding file for reading in binary mode using the wave module, and instantiates WAVMetadata along with PCMEncoding.
The two special methods, .__enter__() and .__exit__(), work in tandem by performing the setup and cleanup actions associated with the WAV file. They make your class a context manager, so you can instantiate it through the with statement:

The .__enter__() method returns the newly created WAVReader instance while .__exit__() ensures that your WAV file gets properly closed before leaving the current block of code.
With this new class in place, you can conveniently access the WAV file’s metadata along with the PCM encoding format. In turn, this lets you convert the raw bytes into a long sequence of numeric amplitude levels.
When dealing with relatively small files, such as the recording of a bicycle bell, it’s okay to eagerly load all frames into memory and have them converted into a one-dimensional NumPy array of amplitudes. To do so, you can implement the following helper method in your class:

Because calling .readframes() on a Wave_read instance moves the internal pointer forward, you call .rewind() to ensure that you'll read the file from the beginning in case you call the method more than once. Then, you decode the audio frames to a flat array of amplitudes using the appropriate encoding.
Calling this method will produce a NumPy array composed of floating-point numbers similar to the following example:

However, when processing audio signals, it’s usually more convenient to look at your data as a sequence of frames or channels rather than individual amplitude samples. Fortunately, depending on your needs, you can quickly reshape your one-dimensional NumPy array into a suitable two-dimensional matrix of frames or channels.
You’re going to take advantage of a reusable custom decorator to arrange the amplitudes in rows or columns:

You wrap the call to your internal ._read() method in a cached property named .frames, so that you read the WAV file once at most when you first access that property. The next time you access it, you’ll reuse the value remembered in the cache. While your second property, .channels, delegates to the first one, it applies a different parameter value to your @reshape decorator.
You can now add the following definition of the missing decorator:

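A sketch of such a decorator factory, matching the behavior described below (the decorated method is expected to return a flat NumPy array, and the instance must expose a channel count, assumed here to be `.num_channels`):

```python
import functools
import numpy as np

def reshape(shape: str):
    """Arrange a flat array of amplitudes into frame rows or channel columns."""
    if shape not in ("rows", "columns"):
        raise ValueError("shape must be either 'rows' or 'columns'")

    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            values = method(self, *args, **kwargs)
            # -1 lets NumPy derive the frame count from the array length
            reshaped = values.reshape(-1, self.num_channels)
            return reshaped if shape == "rows" else reshaped.T
        return wrapper
    return decorator
```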
It's a parameterized decorator factory, which takes a string named shape, whose value can be either "rows" or "columns". Depending on the supplied value, it reshapes the NumPy array returned by the wrapped method into a sequence of frames or channels. You use minus one as the size of the first dimension to let NumPy derive it from the number of channels and the array length. For columnar format, you transpose the matrix.
To see this in action and notice the difference in the amplitude arrangement in the array, you can run the following code snippet in the Python REPL:

Because this is stereo audio, each item in .frames consists of a pair of amplitudes corresponding to the left and right channels. On the other hand, the individual .channels comprise a sequence of amplitudes for their respective sides, letting you isolate and process them independently if needed.
Great! You’re now ready to visualize the complete waveform of each channel in your WAV files. Before moving on, you may want to add the following lines to the file in your waveio package:

The first will let you import the WAVReader class directly from the package, skipping the intermediate reader module. The special variable __all__ holds a list of names available in a wildcard import.
Plot a Static Waveform Using Matplotlib
In this section, you’ll combine the pieces together to visually represent a waveform of a WAV file. By building on top of your waveio abstractions and leveraging the Matplotlib library, you’ll be able to create graphs just like this one:
Stereo Waveforms of a Bicycle Bell Sound

It’s a static depiction of the entire waveform, which lets you study the audio signal’s amplitude variations over time.
If you haven't already, install Matplotlib into your virtual environment and create a new Python script outside of the waveio package. Next, paste the following source code into your new script:

You use the argparse module to read your script’s arguments from the command line. Currently, the script expects only one positional argument, which is the path pointing to a WAV file somewhere on your disk. You use the pathlib.Path data type to represent that file path.
Following the name-main idiom, you call your main() function, which is the script’s entry point, at the bottom of the file. After parsing the command-line arguments and opening the corresponding WAV file with your WAVReader object, you can proceed and plot its audio content.
The plot() function performs these steps:

Lines 19 to 24 create a figure with subplots corresponding to each channel in the WAV file. The number of channels determines the number of rows, while there’s only a single column in the figure. Each subplot shares the horizontal axis to align the waveforms when zooming or panning.
Lines 26 and 27 ensure that the ax variable is a sequence of Axes by wrapping that variable in a list when there’s only one channel in the WAV file.
Lines 29 to 32 loop through the audio channels, setting the title of each subplot to the corresponding channel number and using fixed y-ticks for consistent scaling. Finally, they plot the waveform for the current channel on its respective Axes object.
Lines 34 to 36 set the window title, make the subplots fit nicely in the figure, and then display the plot on the screen.

To show the number of seconds on the horizontal axis shared by the subplots, you need to make a few tweaks:

By calling NumPy’s linspace() on lines 24 to 28, you calculate the timeline comprising evenly distributed time instants measured in seconds. The number of these time instants equals the number of audio frames, letting you plot them together on line 34. As a result, the number of seconds on the horizontal axis matches the respective amplitude values.
Additionally, you define a custom formatter function for your timeline and hook it up to the shared horizontal axis. Your function implemented on lines 40 to 44 makes the tick labels show the time unit when needed. To only show significant digits, you use the letter g as the format specifier in the f-strings.
As the icing on the cake, you can apply one of the available styles that come with Matplotlib to make the resulting plot look more appealing:

In this case, you pick a style sheet inspired by the visualizations found on the FiveThirtyEight website, which specializes in opinion poll analysis. If this style is unavailable, then you can fall back on Matplotlib’s default style.
Go ahead and test your plotting script against mono, stereo, and potentially even surround sound files of varying lengths.
Once you open a larger WAV file, the amount of information will make it difficult to examine fine details. While the Matplotlib user interface provides zooming and panning tools, it can be quicker to slice audio frames to the time range of interest before plotting. Later, you’ll use this technique to animate your visualizations!
Read a Slice of Audio Frames
If you have a particularly long audio file, then you can reduce the time it takes to load and decode the underlying data by skipping and narrowing down the range of audio frames of interest:
A Slice of the Bongo Drum Waveform

This waveform starts at three and a half seconds, and lasts about a hundred and fifty milliseconds. Now you’re about to implement this slicing feature.
Modify your plotting script to accept two optional arguments, marking the start and end of a timeline slice. Both should be expressed in seconds, which you’ll later translate to audio frame indices in the WAV file:

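The argument parsing might be sketched like this (the `argv` parameter is an addition for testability; the article's script parses `sys.argv` directly):

```python
import argparse
import pathlib

def parse_args(argv=None) -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument("path", type=pathlib.Path)
    # Start defaults to the very beginning of the file; an end of None
    # will later be interpreted as the file's total duration
    parser.add_argument("-s", "--start", type=float, default=0.0)
    parser.add_argument("-e", "--end", type=float, default=None)
    return parser.parse_args(argv)
```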
When you don't specify the start time with -s or --start, then your script assumes you want to begin reading audio frames from the very beginning at zero seconds. On the other hand, the default value for the -e or --end argument is equal to None, which you'll treat as the total duration of the entire file.

Note: You can supply negative values for either or both arguments to indicate offsets from the end of the WAV file. For example, --start -0.5 would reveal the last half a second of the waveform.

Next, you’ll want to plot the waveforms of all audio channels sliced to the specified time range. You’ll implement the slicing logic inside a new method in your WAVReader class, which you can call now:

You've essentially replaced a reference to the .channels property with a call to the .channels_sliced() method and passed your new command-line arguments—or their default values—to it. When you omit the --start and --end parameters, your script should work as before, plotting the entire waveform.
As you plot a slice of audio frames within your channels, you’ll also want to match the timeline with the size of that slice. In particular, your timeline may no longer start at zero seconds, and it may end earlier than the full duration of the file. To adjust for that, you can attach a range() of frame indices to your channels and use it to calculate the new timeline:

Here, you convert integer indices of audio frames to their corresponding time instants in seconds from the beginning of the file. At this point, you’re done with editing the plotting script. It’s time to update your waveio.reader module now.
To allow for reading the WAV file from an arbitrary frame index, you can take advantage of the wave.Wave_read object’s .setpos() method instead of rewinding to the file’s beginning. Go ahead and replace .rewind() with .setpos(), add a new parameter, start_frame, to your internal ._read() method, and give it a default value of None:

When you call ._read() with some value for this new parameter, you’ll move the internal pointer to that position. Otherwise, you’ll commence from the last known position, which means that the function won’t rewind the pointer for you anymore. Therefore, you must explicitly pass zero as the starting frame when you read all frames eagerly in the corresponding property.
Now, define the .channels_sliced() method that you called before in your plotting script:

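The heart of the slicing logic, translating seconds into a range of frame indices, can be sketched as a standalone helper. The actual method also reads and decodes the sliced frames; the helper's name and signature here are assumptions:

```python
def frames_range(num_frames, frames_per_second, start=0.0, end=None):
    """Convert a time slice in seconds into a range of frame indices."""
    if end is None:
        end = num_frames / frames_per_second  # default to the full duration
    frames_slice = slice(
        round(start * frames_per_second),
        round(end * frames_per_second),
    )
    # slice.indices() resolves negative offsets against the frame count
    return range(*frames_slice.indices(num_frames))
```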
This method accepts two optional parameters, which represent the start and end of the WAV file in seconds. When you don’t specify the end one, it defaults to the total duration of the file.
Inside that method, you create a slice() object of the corresponding frame indices. To account for negative values, you expand the slice into a range() object by providing the total number of frames. Then, you use that range to read and decode a slice of audio frames into amplitude values.
Before returning from the function, you combine the sliced amplitudes and the corresponding range of frame indices in a custom wrapper for the NumPy array. This wrapper will behave just like a regular array, but it’ll also expose the .frames_range attribute that you can use to calculate the correct timeline for plotting.
This is the wrapper’s implementation that you can add to the reader module before your WAVReader class definition:

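A sketch consistent with that description might look as follows; the class name is an assumption, since the text only describes the wrapper's behavior:

```python
import numpy as np

class ArraySlice:
    """NumPy-array wrapper that remembers its range of frame indices."""

    def __init__(self, values: np.ndarray, frames_range: range) -> None:
        self.values = values
        self.frames_range = frames_range

    def __getattr__(self, name):
        # Delegate everything else to the wrapped array
        return getattr(self.values, name)

    def __iter__(self):
        return iter(self.values)

    def reshape(self, *args, **kwargs):
        # Re-wrap so .frames_range survives reshaping
        return ArraySlice(self.values.reshape(*args, **kwargs), self.frames_range)

    @property
    def T(self):
        return ArraySlice(self.values.T, self.frames_range)
```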
Whenever you try to access one of NumPy array’s attributes on your wrapper, the .__getattr__() method delegates to the .values field, which is the original array of amplitude levels. The .__iter__() method makes it possible to loop over your wrapper.
Unfortunately, you also need to override the array’s .reshape() method and the .T property, which you use in your @reshape decorator. That’s because they return plain NumPy arrays instead of your wrapper, effectively erasing the extra information about the range of frame indices. You can address this problem by wrapping their results again.
Now, you can zoom in on a particular slice of audio frames in all channels by supplying the --start and --end parameters:

The above command plots the waveform that you saw in the screenshot at the beginning of this section.
Since you know how to load a fraction of audio frames at a time, you can read huge WAV files or even online audio streams incrementally. This enables animating a live preview of the sound in the time and frequency domains, which you’ll do next.
Process Large WAV Files in Python Efficiently
Because WAV files typically contain uncompressed data, it’s not uncommon for them to reach considerable sizes. This can make their processing extremely slow or even prevent you from fitting the entire file into memory at once.
In this part of the tutorial, you’ll read a relatively big WAV file in chunks using lazy evaluation to improve memory use efficiency. Additionally, you’ll write a continuous stream of audio frames sourced from an Internet radio station to a local WAV file.
For testing purposes, you can grab a large enough WAV file from an artist who goes by the name Waesto on various social media platforms:

Hi! I’m Waesto, a music producer creating melodic music for content creators to use here on YouTube or other social media. You may use the music for free as long as I am credited in the description! (Source)

To download Waesto’s music in the WAV file format, you need to subscribe to the artist’s YouTube channel or follow them on other social media. Their music is generally copyrighted but free to use as long as it’s credited appropriately.
You’ll find a direct download link and the credits in each video description on YouTube. For example, one of the artist’s early songs, entitled Sleepless, provides a link to the corresponding WAV file hosted on the Hypeddit platform. It’s a 24-bit uncompressed PCM stereo audio sampled at 44.1 kHz, which weighs in at about forty-six megabytes.
Feel free to use any of Waesto’s tracks or a completely different WAV file that appeals to you. Just remember that it must be a large one.
Animate the Waveform Graph in Real Time
Instead of plotting a static waveform of the whole or a part of a WAV file, you can use the sliding window technique to visualize a small segment of the audio as it plays. This will create an interesting oscilloscope effect by updating the plot in real time:

Animating the Waveform With the Oscilloscope Effect

Such a dynamic representation of sound is an example of music visualization akin to the visual effects you’d find in the classic Winamp player.
You don't need to modify your WAVReader object, which already provides the means necessary to jump to a given position in a WAV file. As earlier, you'll create a new script file to handle the visualization and fill it with the source code below:

The overall structure of the script remains analogous to the one you created in the previous section. You start by parsing the command-line arguments including the path to a WAV file and an optional sliding window’s duration, which defaults to fifty milliseconds. The shorter the window, the fewer amplitudes will show up on the screen. At the same time, the animation will become smoother by refreshing each frame more often.

Note: There are two placeholder functions, animate() and slide_window(), which you’ll implement soon.

After opening the specified WAV file for reading, you call animate() with the filename, the window’s duration, and a lazily-evaluated sequence of windows obtained from slide_window(), which is a generator function:

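The generator's logic might be sketched like this, with a `read_slice(start, end)` callable standing in for the reader's slicing method (the real function works with a `WAVReader` instance instead):

```python
import numpy as np

def slide_window(num_seconds, window_seconds, read_slice):
    """Yield consecutive windows of channel-averaged amplitudes."""
    num_windows = round(num_seconds / window_seconds)
    for i in range(num_windows):
        begin = i * window_seconds
        end = begin + window_seconds
        channels = read_slice(begin, end)
        # Average the amplitudes across channels at each time instant
        yield np.mean(channels, axis=0)
```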
A single window is nothing more than a slice of audio frames that you learned to read earlier. Given the total number of seconds in the file and the window’s duration, this function calculates the window count and iterates over a range of indices. For each window, it determines where it begins and ends on the timeline to slice the audio frames appropriately. Finally, it yields the average amplitude from all channels at the given time instant.
The animation part consumes your generator of windows and plots each of them with a slight delay:

First, you pick Matplotlib’s theme with a dark background for a more dramatic visual effect. Then, you set up the figure and an Axes object, remove the default border around your visualization, and iterate over the windows. On each iteration, you clear the plot, hide the ticks, and fix the vertical scale to keep it consistent across all windows. After plotting the current window, you pause the loop for the duration of a single window.
Here’s how you can start the animation from the command line to visualize the downloaded WAV file with your favorite music:

The oscilloscope effect can give you insight into the temporal dynamics of an audio file in the time domain. Next up, you’ll reuse most of this code to show spectral changes in the frequency domain of each window.
Show a Real-Time Spectrogram Visualization
While a waveform tells you about the amplitude changes over time, a spectrogram can provide a visual representation of how the intensities of different frequency bands vary with time. Media players often include a real-time visualization of the music, which looks similar to the one below:

Animating the Spectrogram of a WAV File

The width of each vertical bar corresponds to a range of frequencies or a frequency band, with its height depicting the relative energy level within that band at any given moment. Frequencies increase from left to right, with lower frequencies represented on the left side of the spectrum and higher frequencies toward the right.

Note: To achieve such an engaging effect in Python, you’ll need to calculate the short-term Fourier transform, which is a sequence of Discrete Fourier Transforms (DFT) computed separately for each audio segment enclosed by the sliding window. You’ll use NumPy to leverage the Fast Fourier Transform (FFT) algorithm for improved performance.
That said, calculating FFT remains a computationally demanding task. So, if your animation ends up looking choppy, then try to reduce the number of frequency bands by decreasing the sliding window’s size.

Now copy the entire source code of your waveform visualization script into a new one, which you'll modify to create a new visualization of the WAV file.
Because you'll be calculating the FFT of short audio segments, you'll want to overlap adjacent segments to minimize the spectral leakage caused by abrupt discontinuities at the edges. They introduce phantom frequencies into the spectrum, which don't exist in the actual signal. An overlap of fifty percent is a good starting point, but you can make it configurable through another command-line argument:

The --overlap argument's value must be an integer between zero inclusive and one hundred exclusive, representing a percentage. The greater the overlap, the smoother the animation will appear.

Note: Unlike the waveform, a spectrum visualization is much more resource-intensive to compute, so you'll have to reduce the window duration to as little as a few milliseconds! That's why you've adjusted the default value for the --seconds argument.

You can now modify your slide_window() function to accept that overlap percentage as an additional parameter:

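The stepping logic might be sketched as follows, again with a `read_slice(start, end)` callable standing in for the reader:

```python
def slide_window(num_seconds, window_seconds, read_slice, overlap_percentage=50):
    # A step smaller than the window's duration makes windows overlap;
    # at zero percent overlap, the step equals the window's duration
    step_seconds = window_seconds * (1 - overlap_percentage / 100)
    num_windows = round(num_seconds / step_seconds)
    for i in range(num_windows):
        begin = i * step_seconds
        yield read_slice(begin, begin + window_seconds)
```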
Instead of moving the window by its whole duration as before, you introduce a step that can be smaller, resulting in a greater number of windows in total. On the other hand, when the overlap percentage is zero, you arrange the windows next to each other without any overlap between them.
You can now pass the overlap requested at the command line to your generator function, as well as the animate() function:

You’ll need to know the overlap to adjust the animation’s speed accordingly. Notice that you wrap the call to slide_window() with another call to a new function, fft(), which calculates the frequencies and their corresponding magnitudes in the sliding window:

There’s a lot going on, so it’ll help to break this function down line by line:

Line 4 calculates the sampling period, which is the time interval between audio samples in the file, as the reciprocal of the frame rate.
Line 6 uses your window size and the sampling period to determine the frequencies in the audio. Note that the highest frequency in the spectrum will be exactly half the frame rate due to the Nyquist–Shannon theorem mentioned earlier.
Lines 7 to 11 find the magnitudes of each frequency determined on line 6. Before performing the Fourier transform of the wave amplitudes, you subtract the mean value, often referred to as the DC bias, which is the non-varying part of the signal representing the zero-frequency term in the Fourier transform. Additionally, you apply a window function to smooth out the window’s edges even more.
Line 12 yields a tuple comprising the frequencies and their corresponding magnitudes.

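Those steps might be sketched as follows. The line numbers above refer to the original listing, not this sketch, and the choice of a Blackman window here is an assumption about which window function is applied:

```python
import numpy as np

def fft(frames_per_second, windows):
    """Yield (frequencies, magnitudes) for each sliding window."""
    for window in windows:
        # The sampling period is the reciprocal of the frame rate
        sampling_period = 1 / frames_per_second
        frequencies = np.fft.rfftfreq(len(window), sampling_period)
        # Remove the DC bias, taper the edges, and take the FFT magnitudes
        magnitudes = np.abs(
            np.fft.rfft((window - np.mean(window)) * np.blackman(len(window)))
        )
        yield frequencies, magnitudes
```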
Lastly, you must update your animation code to draw a bar plot of frequencies at each sliding window position:

You pass the overlap percentage in order to adjust the animation’s delay between subsequent frames. Then, you iterate over the frequencies and their magnitudes, plotting them as vertical bars spaced out by the desired gap. You also update the axis limits accordingly.
Run the command below to kick off the spectrogram’s animation:

Play around with the sliding window’s duration and the overlap percentage to see how they affect your animation. Once you get bored and are thirsty for more practical uses of the wave module in Python, you can step up your game by trying something more challenging!
Record an Internet Radio Station as a WAV File
Up until now, you’ve been using abstractions from your waveio package to read and decode WAV files conveniently, which allowed you to focus on higher-level tasks. It’s now time to add the missing piece of the puzzle and implement the WAVReader type’s counterpart. You’ll make a lazy writer object capable of writing chunks of audio data into a WAV file.
For this task, you’ll undertake a hands-on example—stream ripping an Internet radio station to a local WAV file.

Note: Always check the terms of service to see if it’s legal to record a particular live stream, especially with copyrighted content.
If you have a premium account, then you can use your unique security token to connect to the broadcaster's server using third-party software. That includes your custom Python script. Alternatively, you might be able to find a public radio station that permits recording of their broadcast for personal and non-commercial use. For example, check out the list of NPR stations.

To simplify connecting to an online stream, you’ll use a tiny helper class to obtain audio frames in real time. Expand the collapsible section below to reveal the source code and instructions on using the RadioStream class that you’ll need later:

The following code depends on pyav, which is a Python binding for the FFmpeg library. After installing both libraries, put this module next to your stream-ripping script so that you can import the RadioStream class from it:

The module defines the RadioStream class, which takes the URL address of an Internet radio station. It exposes the underlying WAV metadata and lets you iterate over the audio channels in chunks. Each chunk is represented as a familiar NumPy array.

Now, create the writer module in your waveio package and use the code below to implement the functionality for incrementally writing audio frames into a new WAV file:

The WAVWriter class takes a WAVMetadata instance and a path to the output WAV file. It then opens the file for writing in binary mode and uses the metadata to set the appropriate header values. Note that the number of audio frames remains unknown at this stage, so instead of specifying it, you let the wave module update it later when the file's closed.
Just like the reader, your writer object follows the context manager protocol. When you enter a new context using the with keyword, the new WAVWriter instance will return itself. Conversely, exiting the context will ensure that the WAV file gets properly closed even if an error occurs.
After creating an instance of WAVWriter, you can add a chunk of data to your WAV file by calling .append_channels() with a two-dimensional NumPy array of channels as an argument. The method will reshape the channels into a flat array of amplitude values and encode them using the format specified in the metadata.
Remember to add the following import statement to your waveio package before moving on:

This enables direct importing of the WAVWriter class from your package, bypassing the intermediate writer module.
Finally, you can connect the dots and build your very own stream ripper in Python:

You open a radio stream using the URL provided as a command-line argument and use the obtained metadata for your output WAV file. Usually, the first chunk of the stream contains information such as the media container format, encoding, frame rate, number of channels, and bit depth. Next, you loop over the stream and append each decoded chunk of audio channels to the file, capturing the volatile moment of a live broadcast.
Here’s an example command showing how you can record the Classic EuroDance channel:

Note that you won’t see any output while the program’s running. To stop recording the chosen radio station and exit your script, hit Ctrl+C on Linux and Windows or Cmd+C if you’re on macOS.
While you can now write a WAV file in chunks, you haven't yet implemented the analogous lazy reading. Even though you can load a slice of audio data delimited by the given timestamps, that isn't the same as iterating over a sequence of fixed-size chunks in a loop. You'll implement such a chunk-based reading mechanism next.
Widen the Stereo Field of a WAV File
In this section, you’ll simultaneously read a chunk of audio frames from one WAV file and write its modified version to another file in a lazy fashion. To do that, you’re going to need to enhance your WAVReader object by adding the following method:

It’s a generator method, which takes an optional parameter indicating the maximum number of audio frames to read at a time. This parameter falls back to a default value defined in a constant class attribute. After rewinding the file to its starting position, the method enters an infinite loop, which reads and yields the subsequent chunks of channel data until there are no more audio frames to read.

Note: If a chunk were a regular Python list or another sequence type, then you could simplify your function by using the walrus operator (:=). More specifically, you could evaluate the chunk directly as the loop's continuation condition:

This relies on the implicit conversion of an expression to a Boolean value in a logical context. Python generally treats empty collections as falsy and non-empty ones as truthy. However, that's not the case for NumPy arrays with more than one element, whose truth value NumPy considers ambiguous, so it raises an error instead of guessing.
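For example, with plain lists, the whole loop could collapse into this hypothetical helper:

```python
def chunks_lazy(read_chunk):
    # Hypothetical: read_chunk() returns a (possibly empty) list.
    # The walrus operator assigns and tests the chunk in one step,
    # ending the loop on the first empty (falsy) chunk.
    while chunk := read_chunk():
        yield chunk
```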

Like most other methods and properties in this class, .channels_lazy() is decorated with @reshape to arrange the decoded amplitudes in a more convenient way. Unfortunately, this decorator acts on a NumPy array, whereas your new method returns a generator object. To make them compatible, you must update the decorator’s definition by handling two cases:

You use the inspect module to determine if your decorator wraps a regular method or a generator method. Both wrappers do the same thing, but the generator wrapper yields the reshaped values in each iteration, while the regular method wrapper returns them.
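Here's one way such a dual-purpose decorator can be arranged. This simplified sketch takes the target shape directly as an argument, whereas the tutorial's version derives it from the file's metadata:

```python
import functools
import inspect


def reshape(shape):
    """Reshape the decorated method's NumPy result.

    For a generator method, reshape each yielded chunk instead.
    """
    def decorator(method):
        if inspect.isgeneratorfunction(method):
            # Generator method: yield the reshaped values lazily
            @functools.wraps(method)
            def wrapper(self, *args, **kwargs):
                for values in method(self, *args, **kwargs):
                    yield values.reshape(shape)
        else:
            # Regular method: return the reshaped values directly
            @functools.wraps(method)
            def wrapper(self, *args, **kwargs):
                return method(self, *args, **kwargs).reshape(shape)
        return wrapper
    return decorator
```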
Lastly, you can add another property that’ll tell you whether your WAV file is a stereo one or not:

It checks if the number of channels declared in the file’s header is equal to two.
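On the reader, that property can be as short as this sketch, assuming a metadata object that exposes the number of channels:

```python
class WAVReader:
    # Only the parts relevant to the new property are sketched here.
    def __init__(self, metadata):
        self.metadata = metadata

    @property
    def is_stereo(self):
        # Exactly two channels in the header means a stereo file
        return self.metadata.num_channels == 2
```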
With these changes in place, you can read your WAV files in chunks and start applying various sound effects. For example, you can widen or narrow the stereo field of an audio file to enhance or diminish the sensation of space. One such technique involves converting a conventional stereo signal comprising the left and right channels into the mid and side channels.
The mid channel (M) contains a monophonic component that's common to both sides, while the side channel (S) captures the differences between the left (L) and right (R) channels. You can convert between both representations using the following formulas:

M = (L + R) / 2
S = (L - R) / 2

L = M + S
R = M - S

Conversion Between the Mid-Side and Left-Right Channels

When you’ve separated the side channel, you can boost it independently from the mid one before recombining them to the left and right channels again. To see this in action, create a script named that takes paths to the input and output WAV files as arguments with an optional strength parameter:

The --strength parameter is a multiplier for the side channel. Use a value greater than one to widen the stereo field and a value between zero and one to narrow it.
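The command-line interface can be sketched with argparse. Apart from --strength, the argument names and the default value here are assumptions:

```python
from argparse import ArgumentParser
from pathlib import Path


def parse_args(argv=None):
    parser = ArgumentParser(description="Widen the stereo field of a WAV file")
    parser.add_argument("input_path", type=Path)
    parser.add_argument("output_path", type=Path)
    parser.add_argument(
        "-s",
        "--strength",
        type=float,
        default=2.0,  # Assumed default: values > 1 widen, < 1 narrow
    )
    return parser.parse_args(argv)
```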
Next, implement the channel conversion formulas as Python functions:

Both functions take two parameters, corresponding to either the left and right or the mid and side channels, and return a tuple of the converted channels.
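Using the formulas above, the pair of functions might look like this sketch, in which the function names are assumptions:

```python
def stereo_to_ms(left, right):
    """Convert the left-right channels to the mid-side representation."""
    return (left + right) / 2, (left - right) / 2


def ms_to_stereo(mid, side):
    """Convert the mid-side channels back to left-right."""
    return mid + side, mid - side
```

Because the operations are element-wise, the same functions work on scalars and NumPy arrays alike.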
Finally, you can open a stereo WAV file for reading, loop through its channels in chunks, and apply the explained mid-side processing:

The output WAV file has the same encoding format as the input file. After converting each chunk into the mid and side channels, you convert them back to the left and right channels while boosting the side one only.
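The heart of that loop, boosting the side channel and recombining the result, can be isolated into a small generator; wiring it between a reader and a writer is then a matter of a for loop:

```python
def widen_stereo(chunks, strength):
    """Yield (left, right) chunks with the side channel scaled.

    Assumes each chunk is a pair of NumPy arrays holding the left
    and right channel amplitudes.
    """
    for left, right in chunks:
        mid, side = (left + right) / 2, (left - right) / 2
        boosted = strength * side  # Boost (or attenuate) the side channel
        yield mid + boosted, mid - boosted
```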
Notice that you append the modified channels as separate arguments now, whereas your radio recording script passed a single NumPy array of combined channels. To make the .append_channels() method work with both types of invocations, you can update your WAVWriter class as follows:

You use structural pattern matching again to differentiate between a single multi-channel array and multiple single-channel arrays, reshaping them into a flat sequence of amplitudes for encoding accordingly.
Try boosting one of the sample WAV files, such as the bicycle bell sound, by a factor of five:

Remember to choose a stereo sound, and for the best listening experience, use your external loudspeakers instead of headphones to play back the output file. Can you hear the difference?
You’ve covered a lot of ground in this tutorial. After learning about the WAV file’s structure, you got to grips with Python’s wave module for reading and writing raw binary data. Then, you built your own abstractions on top of the wave module so you could think about the audio data in higher-level terms. This, in turn, allowed you to implement several practical and fun audio analysis and processing tools.
In this tutorial, you’ve learned how to:

Read and write WAV files using pure Python
Handle the 24-bit PCM encoding of audio samples
Interpret and plot the underlying amplitude levels
Record online audio streams like Internet radio stations
Animate visualizations in the time and frequency domains
Synthesize sounds and apply special effects

Now that you have so much experience working with WAV files in Python under your belt, you can tackle even more ambitious projects. For instance, based on your knowledge of calculating the Fourier transform and processing WAV files lazily, you can implement a vocoder effect, popularized by the German band Kraftwerk in the seventies. It’ll make your voice sound like a musical instrument or a robot!
