a SOLUTION for SOUND! audio()

edited August 2 in General Posts: 101

I have been thinking a ton about sound since I started using Codea.

I have searched the forum and found many others who share my interest, but for the most part it was unclear what was actually needed. I have put a great deal of thought into a simple way to extend Codea that would allow low-level DSP and much better control over sound, and it requires one main thing:

there needs to be access to the main implicit sound thread used for supplying the buffer directly.

I hear you say: "but there already is that, the sound.buffer object", and you are partially correct.
But there is no way to access it at the sample level.

It would look like this:

function setup()
  --setup here
end

function draw()
  --draw here
end

function touched(touch)
  --touch here
end

function audio()
  --AUDIO HERE
end

This audio() function would operate at the buffer level (analogous to the frame level of the draw thread), and the calculations performed within it would be done in a way analogous to the image:get()/set() API:

setting a single sample in the output buffer with
audio:set(index, value)
and getting a sample from the audio input with
value = audio:get(index)

The sound object can easily be a part of this; the point is accessing the in and out buffers.
A buffer is just an array of 32-bit floats from -1.0 to 1.0,
or, if desired, ints for fixed-point 8/16/24 bit which would be translated to the native output.
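
To make this concrete, here is a rough sketch of what such a callback could look like. To be clear, audio(), audio.frames, audio.rate, audio:get() and audio:set() are all part of the proposed API above, not existing Codea functions, and the buffer size and sample rate are assumptions:

local phase = 0

function audio()
  -- hypothetical: called once per output buffer, e.g. 256 frames at 44100 Hz
  for i = 1, audio.frames do
    phase = phase + 2 * math.pi * 440 / audio.rate
    local mic = audio:get(i)                        -- one sample from the input buffer
    audio:set(i, 0.5 * math.sin(phase) + 0.5 * mic) -- one float sample in -1.0 .. 1.0
  end
end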

I am assuming this would expose the framework that Codea is already using to establish an audio output, but in a lower-level way that makes sense alongside the currently established paradigms.

It would only require the addition of an input stream from the mic, and even that is not critical up front.
But that would add another thing that many people would definitely like: CAMERA + AUDIO.

This would also potentially make SCREEN CAPTURE with voiceover and sound effects possible.

This thread is already there and working; we just need access to it. Then inspired people like myself can make things like the SODA / Cider of audio.

The audio thread would not be essential for the many people who are fine with the current sound techniques, so it would preserve backward compatibility moving forward.

We have frame level, pixel level, GPU level, and interaction at the touch level. Why not audio level?

Director's cut:


I have done extensive experiments generating sound in the draw function with buffers of 44100/60 samples, but that breaks down because of the nature of a draw thread: it has to be flexible in length and timing, and when overloaded with draw requests it can slow the frame rate. I tried using DeltaTime in various ways to derive the buffer length, but that also falls short for many reasons, since it relies on the last DeltaTime. I will continue to try for the sake of my own fun, because I like making things do stuff they are not usually meant to do (I'm a hacker :P).
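
For reference, the experiment looked roughly like the sketch below (my reconstruction, not the exact code). It assumes Codea's soundbuffer(data, format, freq) constructor and that sound() will accept a soundbuffer object; the DeltaTime-based sizing is exactly the fragile part described above:

local rate = 44100
local phase = 0

function draw()
  background(40)
  -- samples needed to cover the last frame (ideally 44100/60 = 735)
  local n = math.floor(rate * DeltaTime)
  local bytes = {}
  for i = 1, n do
    phase = phase + 2 * math.pi * 440 / rate
    -- FORMAT_MONO8 expects unsigned 8-bit samples, 0..255
    bytes[i] = string.char(math.floor(127.5 + 127.5 * math.sin(phase)))
  end
  if n > 0 then
    -- playback start is quantised to the frame, so any hiccup in draw() is audible
    sound(soundbuffer(table.concat(bytes), FORMAT_MONO8, rate))
  end
end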

The fact is there is currently no place to hook in a sample-level sound object that can manipulate or generate audio, and that is OK: Codea was made for games and for simplifying the process of creating them, and sound is usually considered a secondary element in that endeavor. But there has long been a per-frame draw function, a per-touch function, and so on, and there is no logical place to put a sound function within that framework, because the reality is that it requires a root-level sound function.

The API and current sound functionality appear to be not fully working, which I have experienced directly and confirmed through many threads. This is because things are structured in an atypical way, one that I can fix. I want to fix it, and it will be easy for me to do once I have access to the thread. For that matter, I know DSP inside and out, I am loving Codea, and I want to help grow it. This is the only thing I need.


Comments

  • Posts: 101

    I was just looking at the TouchLua+ app for iOS to see what it's like. For the most part it is nothing in comparison to Codea, but they seem to have an audio solution similar in some ways to my proposal.

    You will notice that for the most part it appears to be the same functionality as Codea's sound() API, but when you get down to the user buffer there is a bit of a difference, which presents a similar but potentially more compatible variation on my proposed solution. I will put the user buffer section first and then the full API.

    playbuffer – play user buffer
    
        played  = audio.playbuffer( buffer, volume, pitch, pan )
    
        in:
        buffer : table with the following 
            rate : number, sampling rate - usually 44100 Hz
            channels : number, mono (1) or stereo (2)
            bits : number, bits per sample, should be 16
            data : table having sound samples (table length = rate * channels * time_in_sec)
        volume : number, 0 to 1.0   (default is 1.0)
        pitch : number, sound pitch (default is 1.0)
        pan : number, sound pan -1.0 to 1.0  (default is 0.0)
    
        out:
        played : boolean, true if no error
    

    ALSO NOTE SAVEBUFFER to WAV file, this would be very useful/critical for music/sound design apps:


    savebuffer – save user sound buffer as wav file
    
        saved  = audio.savebuffer( file, buffer )
    
        in:
        file : string, file name (path), you can use relative path
            to access resources path use ‘@resources/filename.ext’
        buffer : table with the following fields
            rate : number, sampling rate - usually 44100 Hz
            channels : number, mono (1) or stereo (2)
            bits : number, bits per sample, should be 16
            data : table having sound samples (table size = rate * channels * time_in_sec)
    
        out:
        saved : boolean, true if no error
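
    For illustration, here is how those two calls might be driven from Lua, going only by the spec quoted above (the buffer fields and the audio.playbuffer / audio.savebuffer signatures come from that spec; I am assuming the data table holds signed 16-bit integer samples, and the tone and file name are placeholders of my own):

    -- build one second of a 440 Hz sine as a 16-bit mono buffer table
    local rate, secs = 44100, 1
    local buf = { rate = rate, channels = 1, bits = 16, data = {} }
    for i = 1, rate * secs do
      buf.data[i] = math.floor(32767 * math.sin(2 * math.pi * 440 * i / rate))
    end

    local played = audio.playbuffer(buf, 1.0, 1.0, 0.0)         -- full volume, centred
    local saved  = audio.savebuffer("@resources/tone.wav", buf)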
    Audio Library
    
    The Audio Library enables you to play music and sound effects.
    
    Audio Library Capabilities:
    - Play one music track at once
    - Play multiple sound effects synchronously
    - Create and play your own sounds and save them as files
    
    Module name is audio. 
    
    playbg – play background music
    
        played  = audio.playbg( file, loop, volume, pan )
    
        in:
        file : string, file name (path), you can use relative path
            to access resources path use ‘@resources/filename.ext’
            default value is the one provided from preloadbg()
        loop : boolean, repeat playing (default is false)
        volume : number, 0 to 1.0   (default is 1.0)
        pan : number, sound pan -1.0 (far left) to 1.0 (far right),  (default is 0.0)
    
        out:
        played : boolean, true if no error
        all parameters are optional 
        (if you want to provide a parameter all previous parameters should be 
               supplied)
    
    preloadbg – cache background music into memory
    
        loaded  = audio.preloadbg( file )
    
        in:
        file : string, file name (path), you can use relative path
            to access resources path use ‘@resources/filename.ext’
    
        out:
        loaded : boolean, true if no error
    
    setbgvolume – change background music volume
    
        audio.setbgvolume( volume )
    
        in:
        volume : number, 0 to 1.0   (default is 1.0)
    
    bgvolume – get background music volume
    
        volume  = audio.bgvolume( )
    
        out:
        volume : number, 0 to 1.0
    
    pausebg – pause playing background music
    
        audio.pausebg()
    
    bgpaused –  check if background music is paused
    
        flag = audio.bgpaused()
    
        out:
        flag : boolean, true if background is paused
    
    resumebg – resume playing background music
    
        audio.resumebg()
    
    stopbg – stop background music
    
        audio.stopbg()
    
    mutebg – mute background music
    
        audio.mutebg()
    
    bgmuted – check if background music is muted
    
        flag = audio.bgmuted()
    
        out:
        flag : boolean, true if background is muted
    
    unmutebg – unmute background music
    
        audio.unmutebg()
    
    bgplaying – check if background music is currently playing
    
        flag = audio.bgplaying()
    
        out:
        flag : boolean, true if background is playing
    
    playeffect – play effect sound
    
        played  = audio.playeffect( file, volume, pitch,  pan )
    
        in:
        file : string, file name (path), you can use relative path
            to access resources path use ‘@resources/filename.ext’
        volume : number, 0 to 1.0   (default is 1.0)
        pitch : number, sound pitch (default is 1.0)
        pan : number, sound pan -1.0 (far left) to 1.0 (far right),  (default is 0.0)
    
        out:
        played : boolean, true if no error
    
        volume, pitch, and pan are optional 
    
    preloadeffect – cache effect sound into memory
    
        loaded  = audio.preloadeffect( file )
    
        in:
        file : string, file name (path), you can use relative path
            to access resources path use ‘@resources/filename.ext’
    
        out:
        loaded : boolean, true if no error
    
    unloadeffect – uncache effect sound from memory
    
        unloaded  = audio.unloadeffect( file )
    
        in:
        file : string, file name (path), you can use relative path
            to access resources path use ‘@resources/filename.ext’
    
        out:
        unloaded : boolean, true if no error
    
    unloadalleffects – uncache all sound effects from memory
    
        audio.unloadalleffects( )
    
    playbuffer – play user buffer
    
        played  = audio.playbuffer( buffer, volume, pitch, pan )
    
        in:
        buffer : table with the following 
            rate : number, sampling rate - usually 44100 Hz
            channels : number, mono (1) or stereo (2)
            bits : number, bits per sample, should be 16
            data : table having sound samples (table length = rate * channels * time_in_sec)
        volume : number, 0 to 1.0   (default is 1.0)
        pitch : number, sound pitch (default is 1.0)
        pan : number, sound pan -1.0 to 1.0  (default is 0.0)
    
        out:
        played : boolean, true if no error
    
    savebuffer – save user sound buffer as wav file
    
        saved  = audio.savebuffer( file, buffer)
    
        in:
        file : string, file name (path), you can use relative path
            to access resources path use ‘@resources/filename.ext’
        buffer : table with the following fields
            rate : number, sampling rate - usually 44100 Hz
            channels : number, mono (1) or stereo (2)
            bits : number, bits per sample, should be 16
            data : table having sound samples (table size = rate * channels * time_in_sec)
    
        out:
        saved : boolean, true if no error
    
    stopalleffects – stop playing all sound effects
    
        audio.stopalleffects( )
    
    stopeverything – stops background music and all sound effects
    
        audio.stopeverything( )
    
    pauseall – pause background music and all sound effects
    
        audio.pauseall( )
    
    allpaused – check if all sounds are paused
    
        audio.allpaused( )
    
    resumeall – resume playing all sounds
    
        audio.resumeall( )
    
    muteall – mute background music and all sound effects
    
        audio.muteall( )
    
    allmuted – check if all sounds are muted
    
        audio.allmuted( )
    
    unmuteall – unmute all sounds
    
        audio.unmuteall( )
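
    And a quick sketch of the background-music and effect calls, again based only on the listing above (the asset file names are placeholders):

    -- background music: preload it, then loop it at 80% volume, centred
    if audio.preloadbg("@resources/theme.mp3") then
      audio.playbg("@resources/theme.mp3", true, 0.8, 0.0)
    end

    -- fire-and-forget effect, slightly pitched up and panned to the right
    audio.playeffect("@resources/laser.wav", 1.0, 1.2, 0.3)

    -- pause music and all effects together, e.g. when the game is paused
    audio.pauseall()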
    
    
    
  • John (Admin Mod)
    Posts: 446

    @AxiomCrux I am interested in opening up Codea's sound generation tools to allow for the kind of things you are interested in. The issue right now is just finding time while working on things that will benefit the majority of users. I think a sound overhaul is definitely due and something we will consider in our roadmap.

  • Posts: 101

    @John I may have overcomplicated my explanation/idea by posting the follow-up to my initial idea. The realization I was initially trying to propose is likely relatively simple to implement, and it paves the way toward a new, more open-ended solution without requiring an overwhelming overhaul up front.

    To summarize, all that is needed initially:
    the addition of a root-level audio() function in the main tab that provides write access directly to the output buffer that is currently under the hood / behind the scenes.

    I assume this is a power of 2 like 128 / 256 / 512 samples, which is pretty standard.
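
    For a sense of scale (my numbers, not from the thread): at 44.1 kHz each of those buffer sizes amounts to only a few milliseconds of audio per callback.

    local rate = 44100
    for _, n in ipairs({128, 256, 512}) do
      print(string.format("%d samples = %.2f ms", n, n / rate * 1000))
    end
    -- prints: 128 samples = 2.90 ms, 256 samples = 5.80 ms, 512 samples = 11.61 ms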

    I have thought through quite a bit more; if you would like, I can elaborate. If you have a chance to look over my initial post, let me know whether this seems a viable route.

  • Posts: 137

    Just dropping in to show my support for this!

  • edited August 17 Posts: 449

    I appreciate the effort you put into this, but I think your API proposal is a little overcomplicated... or maybe you chose weird names... Anyway, I second the idea of having more audio options and hardware access (like the mic). Hopefully in the near future, after ARKit is added :D *spoiler*
