FFmpeg Error: Error: Output format wav is not available #1268

Open
AaronCLH opened this issue May 1, 2024 · 3 comments

AaronCLH commented May 1, 2024

I am running this code on localhost to test out Microsoft Azure speech recognition. The idea is to record my voice with my headset, use FFmpeg to convert the recording to WAV format, and send the audio to Microsoft Azure. But again and again, FFmpeg kept giving me this error saying the output format wav is not available.

I tried to do this manually on the command prompt without any problem:
ffmpeg -i input.mp3 -acodec pcm_s16le -ar 16000 -ac 1 output.wav

I also tried running ffmpeg from the code without the toFormat call, again without any problem:
// Convert MP3 to WAV
ffmpeg('testing1.mp3')
    .output('output.wav')
    .on('end', function() {
        console.log('Conversion to WAV completed');
    })
    .on('error', function(err) {
        console.error('Error:', err);
    })
    .run();

These confirm that ffmpeg is installed correctly. But as soon as I run the code below, it keeps telling me that the output format wav is not available. I tried calling getAvailableFormats, and it returns an empty array.
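
For reference, this is roughly the check I ran (a minimal sketch; the path is the same one I pass to setFfmpegPath in the code below, and getAvailableFormats is fluent-ffmpeg's capability query):

const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath("C:\\node_modules\\ffmpeg-7.0-full_build\\bin\\ffmpeg.exe");

// Ask fluent-ffmpeg which formats it detects from the configured binary;
// on my setup this comes back empty instead of listing "wav"
ffmpeg.getAvailableFormats(function(err, formats) {
    if (err) {
        console.error('getAvailableFormats error:', err);
    } else {
        console.log('Available formats:', Object.keys(formats || {}));
    }
});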

I installed all the packages this week (May 2024), so all the versions should be up-to-date.

Please help!

Version information

  • fluent-ffmpeg version: [email protected]
  • ffmpeg version: ffmpeg-7.0-full_build
  • OS: Windows 11 Pro OS build: 22631.3527

Code to reproduce

// Backend (Express) server that receives the recording and converts it with fluent-ffmpeg
const express = require('express');
const cors = require('cors');
const multer = require('multer');
const upload = multer({ storage: multer.memoryStorage() });
const speechsdk = require('microsoft-cognitiveservices-speech-sdk');
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath("C:\\node_modules\\ffmpeg-7.0-full_build\\bin\\ffmpeg.exe");
ffmpeg.setFfprobePath("C:\\node_modules\\ffmpeg-7.0-full_build\\bin\\ffprobe.exe");
const stream = require('stream');

const app = express();
app.use(cors());

const speechConfig = speechsdk.SpeechConfig.fromSubscription("SubscriptionNumber", "Location");

app.post('/transcribe', upload.single('audio'), (req, res) => {
    if (!req.file) {
        return res.status(400).send({ error: 'No audio file uploaded' });
    }

    // Wrap the uploaded buffer in a readable stream for ffmpeg,
    // and prepare a pass-through stream to collect the converted output
    const bufferStream = new stream.PassThrough();
    bufferStream.end(req.file.buffer);
    const ffmpegStream = new stream.PassThrough();

    ffmpeg(bufferStream)
        .audioCodec('pcm_s16le')
        .audioFrequency(16000)
        .audioChannels(1)
        .toFormat('wav')
        .on('end', () => console.log('Conversion finished.'))
        .on('error', function(err, stdout, stderr) {
            console.error('FFmpeg Error:', err);
            console.error('FFmpeg Stderr:', stderr);
            res.status(500).send({ error: 'Error converting audio', details: stderr });
        })
        .pipe(ffmpegStream, { end: true });

    // Feed the converted audio into an Azure push stream for recognition
    const pushStream = speechsdk.AudioInputStream.createPushStream();
    ffmpegStream.on('data', chunk => pushStream.write(chunk));
    ffmpegStream.on('end', () => {
        pushStream.close();
        const audioConfig = speechsdk.AudioConfig.fromStreamInput(pushStream);
        const recognizer = new speechsdk.SpeechRecognizer(speechConfig, audioConfig);

        recognizer.recognizeOnceAsync(result => {
            if (result.reason === speechsdk.ResultReason.RecognizedSpeech) {
                res.send({ text: result.text });
            } else {
                res.status(500).send({ error: 'Failed to recognize speech', details: result });
            }
        });
    });
});

const port = 3001;
app.listen(port, () => {
    console.log(`Server running on port ${port}`);

    // Sanity check when the server starts: run the same conversion with the ffmpeg CLI directly
    const { exec } = require('child_process');
    exec('ffmpeg -i testing1.mp3 -acodec pcm_s16le -ar 16000 -ac 1 output1.wav', (error, stdout, stderr) => {
      if (error) {
        console.error('Error executing FFmpeg:', error);
        return;
      }
      console.log('Success:', stdout);
    });


    ffmpeg.ffprobe('testing1.mp3', function(err, metadata) {
        if (err) {
            console.error('Error:', err);
        } else {
            console.log('Metadata:', metadata);
        }
    });
    // Convert MP3 to WAV
    const ffmpegStream = new stream.PassThrough();
    ffmpeg('testing1.mp3')
        .audioCodec('pcm_s16le')
        .audioFrequency(16000)
        .audioChannels(1)
        .outputOptions(['-f wav', '-v verbose'])  // Adding verbose logging
        .on('end', () => console.log('Conversion finished.'))
        .on('error', function(err, stdout, stderr) {
            console.error('FFmpeg Error:', err);
            console.error('FFmpeg Stderr:', stderr);
        })
        .pipe(ffmpegStream, { end: true });

});

// Frontend (React) component that records audio and posts it to /transcribe
import React, { useState } from 'react';

function App() {
    const [isRecording, setIsRecording] = useState(false);
    const [transcription, setTranscription] = useState('');
    const [error, setError] = useState('');

    const startRecording = async () => {
        if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
            try {
                const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
                const mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
                let audioChunks = [];

                mediaRecorder.ondataavailable = event => {
                    audioChunks.push(event.data);
                };

                mediaRecorder.onstop = async () => {
                    const audioBlob = new Blob(audioChunks, { type: 'audio/webm' });
                    const formData = new FormData();
                    formData.append('audio', audioBlob);

                    fetch('http://localhost:3001/transcribe', {
                        method: 'POST',
                        body: formData,
                    })
                    .then(response => {
                        if (response.ok) return response.json();
                        throw new Error('Failed to fetch transcription');
                    })
                    .then(data => {
                        setTranscription(data.text);
                        setError('');
                    })
                    .catch(error => {
                        console.error('Error:', error);
                        setError('Failed to transcribe audio.');
                    });
                };

                mediaRecorder.start();
                setIsRecording(true);
                setTimeout(() => {
                    mediaRecorder.stop();
                    setIsRecording(false);
                }, 5000);
            } catch (error) {
                console.error('Error accessing the microphone:', error);
                setError('Error accessing the microphone.');
            }
        }
    };

    return (
        <div className="App">
            <header className="App-header">
                <h1>ATC Speech to Text</h1>
                <button onClick={startRecording} disabled={isRecording}>
                    {isRecording ? 'Recording...' : 'Start Recording'}
                </button>
                <p>{transcription ? `Transcription: ${transcription}` : ''}</p>
                <p>{error}</p>
            </header>
        </div>
    );
}

export default App;


Expected results

MP3 recorded from the mic is converted to WAV format by ffmpeg and submitted to Microsoft Azure Speech Recognition for transcription.

Observed results

FFmpeg Error: Error: Output format wav is not available

FFmpeg Error: Error: Output format wav is not available
at C:\Users\node_modules\fluent-ffmpeg\lib\capabilities.js:589:21
at nextTask (C:\Users\node_modules\async\dist\async.js:5791:13)
at next (C:\Users\node_modules\async\dist\async.js:5799:13)
at C:\Users\node_modules\async\dist\async.js:329:20
at C:\Users\fluent-ffmpeg\lib\capabilities.js:549:7
at handleExit (C:\Users\aaron\liveatc_transcriber\node_modules\fluent-ffmpeg\lib\processor.js:170:11)
at ChildProcess.<anonymous> (C:\Users\aaron\liveatc_transcriber\node_modules\fluent-ffmpeg\lib\processor.js:184:11)
at ChildProcess.emit (node:events:518:28)
at ChildProcess._handle.onexit (node:internal/child_process:294:12)
FFmpeg Stderr: undefined

Checklist

  • I have read the FAQ
  • I tried the same with command line ffmpeg and it works correctly (hint: if the problem also happens this way, this is an ffmpeg problem and you're not reporting it to the right place)
  • I have included full stderr/stdout output from ffmpeg
@soylomass

Same

@MarianoFacundoArch

#1266

@totallytavi
Contributor

@njoyard Old issue
