"## Tones of 4 instruments in the C4-C5 pitch range"
...
...
@@ -79,6 +85,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "162d8c5f",
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -100,6 +107,7 @@
},
{
"cell_type": "markdown",
"id": "f14792ce",
"metadata": {},
"source": [
"**Question:** listen to a B4 Flute tone"
...
...
@@ -108,6 +116,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "fd8ddc05",
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -118,6 +127,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "4cf6149e",
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -127,6 +137,7 @@
},
{
"cell_type": "markdown",
"id": "8d587d16",
"metadata": {},
"source": [
"**Question:** define a function selectRms(y, sr) that computes the amplitude envelope of this tone using the librosa.feature.rms method and select the audio corresponding to the time frame of 8 hop_length before the maximum value of the envelope and 16 hop_length after."
...
...
@@ -135,6 +146,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "6637c013",
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -152,6 +164,7 @@
},
{
"cell_type": "markdown",
"id": "faa4a89b",
"metadata": {},
"source": [
"## Feature set (MFCCs and Chromas)\n",
...
...
@@ -162,6 +175,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "b5063953",
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -184,6 +198,7 @@
},
{
"cell_type": "markdown",
"id": "98036f27",
"metadata": {},
"source": [
"## Display of the features sets\n",
...
...
@@ -202,6 +217,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "22340a35",
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -236,6 +252,7 @@
},
{
"cell_type": "markdown",
"id": "1164aab6",
"metadata": {},
"source": [
"**Answer:** here"
...
...
@@ -243,6 +260,7 @@
},
{
"cell_type": "markdown",
"id": "e4856c4f",
"metadata": {},
"source": [
"**Question:** from the data in track.avg_mfcc, build a matrix X of shape [len(my_tracks), 12], and standardize it row-wise.\n",
...
...
@@ -259,6 +277,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "efa62ea2",
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -290,6 +309,7 @@
},
{
"cell_type": "markdown",
"id": "ee0406ae",
"metadata": {},
"source": [
"**Answer:** here"
...
...
@@ -312,7 +332,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
"version": "3.8.8"
}
},
"nbformat": 4,
...
...
%% Cell type:markdown id:b513b4fa tags:
## Import tools
%% Cell type:code id:6ca01fe4 tags:
``` python
# deal with matrices
import numpy as np
# progress bar
import tqdm
# principal component analysis
from sklearn.decomposition import PCA
# handle musical datasets
import mirdata
# deal with audio data
import librosa
from librosa.display import specshow
# play audio
import IPython.display as ipd
# handle display
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import cm
```
%% Cell type:markdown id:e75f9acd tags:
## Retrieve the tinysol dataset
https://forum.ircam.fr/projects/detail/tinysol
%% Cell type:code id:e04e4326 tags:
``` python
tinysol = mirdata.initialize('tinysol')
tinysol.download()
# run this line in case of inconsistent results that may be due to database corruption
# tinysol.validate()
```
%% Cell type:code id:76448ed6 tags:
``` python
# listen to a random example track
example_track = tinysol.choice_track()
ipd.Audio(example_track.audio_path)
```
%% Cell type:markdown id:0a254d5d tags:
## Tones of 4 instruments in the C4-C5 pitch range
%% Cell type:code id:162d8c5f tags:
``` python
# select pitch range
low_pitch = librosa.note_to_midi("C4")
high_pitch = librosa.note_to_midi("C5")
# select instruments
my_instruments = [
    "Clarinet in Bb", "Flute", "Violin", "Cello"
]
# build selector
my_tracks = {
    track_id: track
    for track_id, track in tinysol.load_tracks().items()
    if low_pitch <= track.pitch_id < high_pitch
    and track.instrument_full in my_instruments
}
```
%% Cell type:markdown id:f14792ce tags:
**Question:** listen to a B4 Flute tone
%% Cell type:code id:fd8ddc05 tags:
``` python
# answer here
####
```
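One possible approach, shown as a hedged sketch: filter the `my_tracks` selection by note name and instrument. The attribute `instrument_full` appears in the selector cell above; a string-valued `pitch` attribute holding the note name is an assumption of this sketch.

``` python
from types import SimpleNamespace

def find_tones(tracks, pitch, instrument):
    # hypothetical helper: filter a dict of tracks by note name and
    # instrument; a string `pitch` attribute is an assumption here
    return [t for t in tracks.values()
            if t.pitch == pitch and t.instrument_full == instrument]

# usage against the notebook's selection would look like:
# fluteB4 = find_tones(my_tracks, "B4", "Flute")
# ipd.Audio(fluteB4[0].audio_path)

# quick self-contained check with stand-in track objects
fake = {
    "a": SimpleNamespace(pitch="B4", instrument_full="Flute"),
    "b": SimpleNamespace(pitch="C4", instrument_full="Violin"),
}
print(len(find_tones(fake, "B4", "Flute")))  # 1
```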
%% Cell type:code id:4cf6149e tags:
``` python
# hop between successive analysis frames, in samples
hop_length = 512
```
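As a back-of-envelope check, assuming a 44.1 kHz sample rate (an assumption of this sketch, not stated by the dataset cells above), a hop of 512 samples means one frame every ~11.6 ms, so a 24-hop excerpt spans roughly 0.28 s:

``` python
sr = 44100                        # assumed sample rate for this check
hop_length = 512
frame_period = hop_length / sr    # seconds between successive frames
window = (8 + 16) * frame_period  # duration of a 24-hop excerpt
print(f"{frame_period * 1000:.2f} ms per hop, {window:.3f} s excerpt")
```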
%% Cell type:markdown id:8d587d16 tags:
**Question:** define a function selectRms(y, sr) that computes the amplitude envelope of this tone using the librosa.feature.rms method and selects the audio corresponding to the time frame from 8 hop_lengths before the maximum value of the envelope to 16 hop_lengths after.
%% Cell type:code id:6637c013 tags:
``` python
def selectRms(y,   # audio signal
              sr   # sampling rate
              ):
    # answer here
    ####
    ####

y, sr = fluteB4[0].audio
ys = selectRms(y, sr)
print(f'Original size: {y.shape[0]}. New size: {ys.shape[0]}')
```
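For reference, the selection logic can be sketched without the dataset, computing the RMS envelope with plain NumPy. The notebook asks for librosa.feature.rms; this framing matches it with center=False, and the frame_length of 2048 is an assumption of the sketch, not the required answer.

``` python
import numpy as np

def select_rms_sketch(y, sr, hop_length=512, frame_length=2048):
    # RMS envelope via plain NumPy; librosa.feature.rms frames the
    # signal the same way when center=False (frame_length assumed)
    n_frames = 1 + (len(y) - frame_length) // hop_length
    rms = np.array([
        np.sqrt(np.mean(y[i * hop_length:i * hop_length + frame_length] ** 2))
        for i in range(n_frames)
    ])
    peak = int(np.argmax(rms))
    start = max(0, (peak - 8) * hop_length)  # 8 hops before the peak
    stop = (peak + 16) * hop_length          # 16 hops after
    return y[start:stop]

# synthetic check: quiet noise with a loud 440 Hz burst in the middle
sr = 22050
rng = np.random.default_rng(0)
y = 0.01 * rng.standard_normal(sr)
y[8000:12000] += np.sin(2 * np.pi * 440 * np.arange(4000) / sr)
ys = select_rms_sketch(y, sr)
print(len(ys))  # 24 hop_lengths = 12288 samples
```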
%% Cell type:markdown id:faa4a89b tags:
## Feature set (MFCCs and Chromas)
**Question:** compute the MFCCs and the Chroma representations for each track. Once computed