The problem
Memory decline is one of the most challenging aspects of aging, impacting comfort, security, and independence. For seniors, daily life can be disrupted by simple yet potentially devastating oversights. For instance, leaving a faucet running might seem minor but can quickly escalate into extensive water damage—soaked floors, ruined furnishings, and costly repairs. Worse, it poses safety risks like electrocution or flooding.
This issue isn’t exclusive to seniors. Children and busy adults can also forget to turn off faucets, contributing to significant water waste and inflated utility bills. According to the U.S. Environmental Protection Agency (EPA), a faucet running for just five minutes wastes an astonishing ten gallons of water—underscoring the urgency of finding a solution.
The solution
To tackle this pressing problem, I developed a smart device that combines Artificial Intelligence and the Internet of Things (AIoT). This device uses a sensitive microphone to detect the sound of running water and instantly sends an alert to the user, reminding them to turn it off. Designed to conserve water and enhance safety, this innovative solution is a step toward protecting both households and the environment.
Materials used in this project
Hardware components
- Particle Photon 2
- Adafruit PDM MEMS Microphone Breakout
- Buzzer
- Jumper Wires
- Breadboard
- 3D-printed case
Software apps and online services
- Particle Console
- Edge Impulse Studio
- Visual Studio Code
- Pushcut
Hand tools and fabrication machines
- Soldering Kit
- 3D Printer
Hardware Selection
This project needs a reliable, low-power, Wi-Fi-enabled device that can send alert messages to the cloud and mobile phones cost-effectively. We will run TensorFlow Lite model inference using Edge Impulse, so we chose the Particle Photon 2, which has ample memory for ML models. For audio data sampling, we use the Adafruit PDM MEMS microphone breakout, and the hardware is assembled on a breadboard with jumper wires.
Schematics
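For reference, the connections are summarized below. The buzzer pin matches buzzerPin in the firmware; the PDM clock and data pins are the ones the Microphone_PDM library expects on the Photon 2 in our setup — treat the pin assignments as an assumption and verify them against the library documentation for your Device OS version.

PDM mic 3V  -> Photon 2 3V3
PDM mic GND -> Photon 2 GND
PDM mic SEL -> leave unconnected (mono)
PDM mic CLK -> Photon 2 A0 (assumed PDM clock pin)
PDM mic DAT -> Photon 2 A1 (assumed PDM data pin)
Buzzer +    -> Photon 2 A5 (buzzerPin in the firmware)
Buzzer -    -> Photon 2 GND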
3D Case
The hardware will be installed near a faucet, so we designed a 3D-printed case that protects it from water splashes while still allowing the microphone to listen to its surroundings effectively.
While the device is primarily meant to be powered from a wall outlet, the case has been thoughtfully designed with ample space to accommodate a compact LiPo battery, allowing for versatile power options.
We adorned the lid with vibrant hues to evoke a lively and dynamic aesthetic, enhancing its gadget-like appeal.
Model creation and training
We will use Edge Impulse Studio to train and build a TensorFlow Lite model. We need to register an account and create a new project at https://studio.edgeimpulse.com. We are using a prebuilt dataset for detecting whether a faucet is running based on audio. It contains 15 minutes of data sampled from a microphone at 16 kHz over the following two classes:
- Faucet – faucet is running, with a variety of background activities.
- Noise – just background activities.
We can import this dataset to the Edge Impulse Studio project using the Edge Impulse CLI Uploader.
Please follow the instructions provided at the link below to install Edge Impulse CLI:
https://docs.edgeimpulse.com/docs/cli-installation
The datasets can be downloaded from here:
https://cdn.edgeimpulse.com/datasets/faucet.zip
Execute the following commands to upload the data to Edge Impulse Studio.
$ unzip faucet.zip
$ cd faucet
$ edge-impulse-uploader --clean
Edge Impulse uploader v1.16.0
? What is your user name or e-mail address (edgeimpulse.com)?
You will be prompted for your username, password, and the project where you want to add the dataset.
$ edge-impulse-uploader --category training \
    faucet/training/*.cbor
$ edge-impulse-uploader --category testing \
    faucet/testing/*.cbor
After uploading finishes, we can see the data on the Data Acquisition page.
On the Impulse Design > Create Impulse page, we can add a processing block and a learning block. After some experimentation, we decided to use a 2,000 ms window of audio data for improved prediction. For the processing block we chose MFE, which extracts a spectrogram from the audio signal using Mel-filterbank energy features and works well for non-voice audio. For the learning block we chose Classification, which learns patterns from the training data and applies them to new data to recognize audio.
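As a quick sanity check on the window size: at the dataset's 16 kHz sample rate, a 2,000 ms window corresponds to 16,000 × 2 = 32,000 raw audio samples per inference.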
Now we need to generate features on the Impulse Design > MFE page. We can go with the default parameters.
After clicking the Save Parameters button, the page redirects to the Generate Features page, where we can start feature generation; it takes a few minutes. Once it completes, we can inspect the output in the Feature Explorer.
Now we can go to the Impulse Design > Classifier page, where we can define the neural network architecture. We are using a 1-D convolutional neural network, which is well suited to audio classification.
After finalizing the architecture, we can start training which will take a couple of minutes to finish. We can see the accuracy and confusion matrix below.
For such a small dataset, 100% accuracy is pretty good, so we will use this model.
Model testing
We can test the model on the test dataset by going to the Model testing page and clicking the Classify all button. The model achieves 100% accuracy on the test dataset as well, so we are confident it should work in a real environment.
Model deployment
The Edge Impulse Studio supports Particle Photon 2, so on the Deployment page, we will choose the Particle library option.
For the Select optimizations option, we will choose the TensorFlow Lite option and opt for the Quantized (Int8) model. Now click the Build button; in a few seconds, the library bundle will be downloaded to your computer. We will use this library when we begin application development.
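The Quantized (Int8) option stores weights as 8-bit integers rather than 32-bit floats, which shrinks the model's flash and RAM footprint considerably, usually at a negligible cost in accuracy — a sensible trade-off on a microcontroller.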
Set up Photon 2
Before running the application, make sure to set up your Photon 2. Please go to setup.particle.io and follow the instructions.
Configure alerts
For this project, we wanted a mobile app for alerts and notifications. We will use the Pushcut app for iOS, which allows us to create actionable notifications using a webhook. The Pushcut app can be installed from the Apple App Store.
In the Pushcut app, we will create a new notification.
Once we have successfully saved the configuration, make sure to copy the autogenerated Webhook URL. This URL will be crucial for setting up the cloud integration in the Particle console later on.
Cloud services integration
Particle Cloud service integrations allow for efficient interaction with internet-based services, including both third-party and custom options. We will be integrating the Pushcut app as a custom webhook integration. First, we need to select the device we want to configure in the Particle Console.
Click on the Cloud services > Integrations from the left menu.
Then select the Custom Webhook from the gallery.
We need to configure the custom webhook as shown below. The chosen event name is “alert,” which will be set in the firmware code later. Copy and paste the webhook URL from the Pushcut app. The device name can be found on the Devices page of the Particle Console.
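As a sketch, the webhook settings look roughly like this (the URL is the one copied from Pushcut; the JSON field names assume Pushcut's standard webhook payload, so double-check them against the Pushcut documentation):

Event name:     alert
URL:            <your Pushcut webhook URL>
Request type:   POST
Request format: JSON
JSON data:      { "title": "Faucet Alert", "text": "{{{PARTICLE_EVENT_VALUE}}}" }

Here {{{PARTICLE_EVENT_VALUE}}} is Particle's webhook template variable, which expands to the published event data ("running faucet" in our firmware). The integration can be dry-run before flashing the device by publishing a test event from the Particle CLI, for example: particle publish alert "running faucet".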
Application Workflow
A clear understanding of the application workflow can be obtained from the diagram below.
Firmware
In the library bundle downloaded from the Edge Impulse Deployment page, we should replace the code in main.cpp with the code provided below.
#define EIDSP_QUANTIZE_FILTERBANK 0
#define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 4

#include "Microphone_PDM.h"
#include "Particle.h"
#include <RFv2_inferencing.h>

#define CONTINUOUS_THRESHOLD_SECS (10)
#define PUBLISH_THRESHOLD_SECS (30)
#define FAUCET_IDX 0
#define NOISE_IDX 1

SYSTEM_THREAD(ENABLED);
SerialLogHandler logHandler(LOG_LEVEL_ERROR);

static bool microphone_inference_start(uint32_t n_samples);
static bool microphone_inference_record(void);
static void microphone_inference_end(void);
static int microphone_audio_signal_get_data(size_t offset, size_t length, float *out_ptr);
void buzz();

/** Audio buffers, pointers and selectors */
typedef struct {
    signed short *buffers[2];
    unsigned char buf_select;
    unsigned char buf_ready;
    unsigned int buf_count;
    unsigned int n_samples;
} inference_t;

static inference_t inference;
static bool record_ready = false;
static signed short *sampleBuffer;
static bool debug_nn = false;
static int print_results = -(EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW);

// buzzer melody: C, D, E, F, G, A, B, C (Hz), all quarter notes
int buzzerPin = A5;
int melody[] = {262, 294, 330, 349, 392, 440, 494, 523};
int noteDurations[] = {4, 4, 4, 4, 4, 4, 4, 4};

uint32_t continuous_faucet_running_start_time;
uint32_t last_notification_sent_time;
uint8_t prev_prediction = NOISE_IDX;
bool rgb_ctrl = false;

/**
 * @brief Particle setup function
 */
void setup()
{
    pinMode(buzzerPin, OUTPUT);
    delay(500);

    ei_printf("Edge Impulse inference runner for Particle devices\r\n");
    ei_printf("Inferencing settings:\n");
    ei_printf("\tInterval: %.2f ms.\n", (float)EI_CLASSIFIER_INTERVAL_MS);
    ei_printf("\tFrame size: %d\n", EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE);
    ei_printf("\tSample length: %d ms.\n", EI_CLASSIFIER_RAW_SAMPLE_COUNT / 16);
    ei_printf("\tNo. of classes: %d\n",
              sizeof(ei_classifier_inferencing_categories) / sizeof(ei_classifier_inferencing_categories[0]));

    run_classifier_init();
    if (microphone_inference_start(EI_CLASSIFIER_SLICE_SIZE) == false) {
        ei_printf("ERR: Could not allocate audio buffer (size %d), this could be due to the window length of your model\r\n",
                  EI_CLASSIFIER_RAW_SAMPLE_COUNT);
        return;
    }

    last_notification_sent_time = Time.now();
}

/**
 * @brief Particle main function. Runs the inferencing loop.
 */
void loop()
{
    bool m = microphone_inference_record();
    if (!m) {
        ei_printf("ERR: Failed to record audio...\n");
        return;
    }

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
    signal.get_data = &microphone_audio_signal_get_data;

    ei_impulse_result_t result = {0};
    EI_IMPULSE_ERROR r = run_classifier_continuous(&signal, &result, debug_nn);
    if (r != EI_IMPULSE_OK) {
        ei_printf("ERR: Failed to run classifier (%d)\n", r);
        return;
    }

    // print and evaluate the predictions once per full model window
    if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)) {
        ei_printf("Predictions ");
        ei_printf("(DSP: %d ms., Classification: %d ms.)",
                  result.timing.dsp, result.timing.classification);
        ei_printf(": \n");
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            ei_printf("    %s: %.5f\n", result.classification[ix].label,
                      result.classification[ix].value);
        }

        // above 80% confidence score
        if (result.classification[FAUCET_IDX].value > 0.8f) {
            if (rgb_ctrl == false) {
                RGB.control(true);
                rgb_ctrl = true;
            }
            RGB.color(0, 0, 255); // blue: faucet detected

            uint32_t current_time = Time.now();
            if (prev_prediction == FAUCET_IDX) {
                // alert only after the faucet has run continuously past the threshold,
                // and throttle repeat notifications
                if ((current_time - continuous_faucet_running_start_time) > CONTINUOUS_THRESHOLD_SECS) {
                    ei_printf("Faucet running time: %ld\n", (current_time - continuous_faucet_running_start_time));
                    if (current_time - last_notification_sent_time > PUBLISH_THRESHOLD_SECS) {
                        RGB.color(255, 0, 0); // red: alert sent
                        buzz();
                        Particle.publish("alert", "running faucet", PRIVATE);
                        last_notification_sent_time = current_time;
                    }
                }
            } else {
                // reset counter
                continuous_faucet_running_start_time = current_time;
                ei_printf("Faucet running time reset\n");
            }
            prev_prediction = FAUCET_IDX;
        } else {
            prev_prediction = NOISE_IDX;
            if (rgb_ctrl == true) {
                RGB.control(false);
                rgb_ctrl = false;
            }
        }
        print_results = 0;
    }
}

static int16_t *sptr;
static uint32_t sample_length = 0;

/**
 * @brief PDM buffer full callback
 *        Get data and call audio thread callback
 */
static void pdm_data_ready_inference_callback(void)
{
    bool dma_ready = Microphone_PDM::instance().noCopySamples([](void *pSamples, size_t numSamples) {
        sample_length = Microphone_PDM::instance().getBufferSizeInBytes() / 2;
        sptr = (int16_t *)pSamples;
    });

    if (record_ready == true && dma_ready) {
        // copy DMA samples into the active double buffer and flip buffers when full
        for (uint32_t i = 0; i < sample_length; i++) {
            inference.buffers[inference.buf_select][inference.buf_count++] = sptr[i];

            if (inference.buf_count >= inference.n_samples) {
                inference.buf_select ^= 1;
                inference.buf_count = 0;
                inference.buf_ready = 1;
            }
        }
    }
}

/**
 * @brief Init inferencing struct and setup/start PDM
 *
 * @param[in] n_samples The n samples
 *
 * @return true on success
 */
static bool microphone_inference_start(uint32_t n_samples)
{
    inference.buffers[0] = (signed short *)malloc(n_samples * sizeof(signed short));
    if (inference.buffers[0] == NULL) {
        return false;
    }

    inference.buffers[1] = (signed short *)malloc(n_samples * sizeof(signed short));
    if (inference.buffers[1] == NULL) {
        free(inference.buffers[0]);
        return false;
    }

    sampleBuffer = (signed short *)malloc((n_samples >> 1) * sizeof(signed short));
    if (sampleBuffer == NULL) {
        free(inference.buffers[0]);
        free(inference.buffers[1]);
        return false;
    }

    inference.buf_select = 0;
    inference.buf_count = 0;
    inference.n_samples = n_samples;
    inference.buf_ready = 0;

    int err = Microphone_PDM::instance()
                  .withOutputSize(Microphone_PDM::OutputSize::SIGNED_16)
                  .withRange(Microphone_PDM::Range::RANGE_2048)
                  .withSampleRate(16000)
                  .init();
    if (err) {
        ei_printf("PDM decoder init err=%d\r\n", err);
    }

    if (Microphone_PDM::instance().start()) {
        ei_printf("Failed to start PDM!");
        microphone_inference_end();
        return false;
    }

    record_ready = true;
    return true;
}

/**
 * @brief Wait on new data
 *
 * @return True when finished
 */
static bool microphone_inference_record(void)
{
    bool ret = true;

    if (inference.buf_ready == 1) {
        ei_printf(
            "Error sample buffer overrun. Decrease the number of slices per model window "
            "(EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)\n");
        ret = false;
    }

    while (inference.buf_ready == 0) {
        pdm_data_ready_inference_callback();
    }

    inference.buf_ready = 0;
    return ret;
}

/**
 * Get raw audio signal data
 */
static int microphone_audio_signal_get_data(size_t offset, size_t length, float *out_ptr)
{
    numpy::int16_to_float(&inference.buffers[inference.buf_select ^ 1][offset], out_ptr, length);
    return 0;
}

/**
 * @brief Stop PDM and release buffers
 */
static void microphone_inference_end(void)
{
    Microphone_PDM::instance().stop();
    free(inference.buffers[0]);
    free(inference.buffers[1]);
    free(sampleBuffer);
}

void buzz()
{
    ei_printf("buzz\n");
    for (int thisNote = 0; thisNote < 8; thisNote++) {
        // note duration = one second divided by the note type (4 = quarter note)
        int noteDuration = 1000 / noteDurations[thisNote];
        tone(buzzerPin, melody[thisNote], noteDuration);
        // pause for the note's duration plus 30% to separate the notes
        delay(noteDuration * 1.30);
        noTone(buzzerPin);
    }
}

#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_MICROPHONE
#error "Invalid model for current sensor."
#endif
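A note on the alert logic: with CONTINUOUS_THRESHOLD_SECS set to 10 and PUBLISH_THRESHOLD_SECS set to 30, the firmware only raises an alert once the faucet class has stayed above the 0.8 confidence threshold continuously for more than 10 seconds, and repeat alerts are throttled to at most one every 30 seconds. The onboard RGB LED turns blue while a faucet is detected and red when an alert is published, which is handy for checking behavior without a serial console.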
Building and flashing firmware
To build the firmware and upload a binary to the Photon 2, Particle provides the Workbench Visual Studio Code extension. Please follow the instructions provided at the link below.
https://docs.particle.io/quickstart/workbench
- In Workbench, select Particle: Import Project and select the project.properties file from the project directory.
- Use Particle: Configure Project for Device, select deviceOS@5.5.0, and choose Photon 2 / P2 as the target.
- Compile with Particle: Compile application (local)
- Flash with Particle: Flash application & DeviceOS (local)
When the application starts running, we can view the inferencing logs in the serial monitor.
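In Workbench, the serial monitor is available via the Particle: Serial Monitor command; running particle serial monitor from the Particle CLI should work as well.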
Predictions (DSP: 18 ms., Classification: 310 ms.):
    faucet: 0.99999
    noise: 0.00001
Predictions (DSP: 18 ms., Classification: 310 ms.):
    faucet: 0.99996
    noise: 0.00004
Predictions (DSP: 18 ms., Classification: 310 ms.):
    faucet: 1.00000
    noise: 0.00000
Live Demo
Conclusion
This easy-to-build device is a game-changer for any household. Plugged into a standard outlet or powered by batteries, it fits seamlessly into various settings. With privacy at its core, all data processing occurs on the device, ensuring sensitive information stays secure. Timely notifications empower users to conserve resources, prevent costly mishaps, and enjoy peace of mind. Whether you’re caring for a senior loved one or managing a busy household, this smart solution keeps you in control.