Monday, November 21, 2022

Internet Radio with Malayalam Channels



In today's digital age, creating an internet radio station has become easier than ever. With just a few simple steps, you can set up your own station and broadcast your favorite tunes to the world. In this article, we'll show you how to build an internet radio player using HTML, with the help of external radio stations.


Getting Started

To get started with your internet radio, you will need basic knowledge of HTML, CSS, and JavaScript. You will also need a web server to host your radio station. You can either use a free web hosting service or a paid one, depending on your budget.

Next, you need to find external radio stations that you want to broadcast on your radio. There are several online directories where you can find radio stations based on genre or location. You can also search for the radio station's URL directly.

Adding Your Own Audio Content

In addition to broadcasting external radio stations, you can also add your own audio content to your internet radio. To do this, convert your audio files to a compatible format, such as MP3 or AAC, host them on your web server, and add them to your HTML page using the HTML <audio> tag.
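As a sketch of how such a page could be put together, here is a small Python script that generates an HTML page with one <audio> element per source. The station name, stream URL, and file path below are placeholders for illustration, not real endpoints.

```python
# Sketch: generate a minimal HTML radio page. The station name, stream
# URL, and local file path below are placeholders, not real endpoints.

def build_player_page(stations, own_tracks):
    """Return an HTML page with one <audio> element per audio source."""
    parts = ["<!DOCTYPE html>", "<html><body>", "<h1>My Internet Radio</h1>"]
    for name, url in stations:
        parts.append(f"<p>{name}</p>")
        parts.append(f'<audio controls src="{url}"></audio>')
    for path in own_tracks:
        parts.append(f'<audio controls src="{path}"></audio>')
    parts.append("</body></html>")
    return "\n".join(parts)

page = build_player_page(
    stations=[("Example Malayalam Station", "https://example.com/stream.mp3")],
    own_tracks=["media/my-song.mp3"],  # MP3/AAC files hosted on your server
)
print(page)
```

Upload the generated page to your web host, and the browser's built-in player handles the rest.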

Promoting Your Internet Radio

Once your internet radio is up and running, it's time to promote it to the world. You can start by sharing your radio station on social media platforms like Facebook, Twitter, and Instagram. You can also submit your radio station to online radio directories to reach a wider audience.

You can check my radio player at https://fm.aravindnc.com/





Tuesday, March 1, 2022

Reviving the Past: Deep Learning Transforms Old Videos into Vibrant Colors

Have you ever come across old black and white videos and wondered what they would look like if they were in color? With the advancement of deep learning, it is now possible to transform black and white videos into colored ones with stunning accuracy.

My new project is about colorizing black-and-white evergreen songs. The colors may not be perfect, since this is not studio-level film-roll remastering, but we can get a feel for how the video would look if it had been shot with a color camera.

The process of colorizing old videos involves training deep neural networks to predict the colors of each pixel in a grayscale image. The neural network is trained on a large dataset of images to learn the relationship between the grayscale and color images. Once trained, the neural network can be used to colorize new grayscale images or videos.
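As a rough, non-neural illustration of "learning the relationship between grayscale and color images," here is a toy lookup-table colorizer: it averages the colors seen at each gray level in paired training data, then applies that mapping to new gray pixels. A real system uses a deep network over whole image patches, not a per-pixel table; this is only to make the idea concrete.

```python
# Toy stand-in for a colorization network: a lookup table that maps each
# gray level to the average color observed for it in "training" pairs.

def train_color_lookup(pairs):
    """pairs: iterable of (gray_value, (r, g, b)) training samples."""
    sums, counts = {}, {}
    for gray, rgb in pairs:
        acc = sums.setdefault(gray, [0, 0, 0])
        for i in range(3):
            acc[i] += rgb[i]
        counts[gray] = counts.get(gray, 0) + 1
    return {g: tuple(c // counts[g] for c in acc) for g, acc in sums.items()}

def colorize(gray_pixels, lookup):
    """Color each pixel via the learned mapping; unseen grays stay gray."""
    return [lookup.get(g, (g, g, g)) for g in gray_pixels]
```

Note the fallback: gray levels never seen in training simply stay gray, which mirrors the reference-image problem discussed below.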

One of the most popular methods for colorizing videos is to use a technique called "frame interpolation." This technique involves using a neural network to predict the colors of intermediate frames between two existing frames. This results in smoother transitions between frames and a more natural-looking colorization.
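The interpolation idea can be sketched in a few lines: given two colorized keyframes, the colors of an in-between frame are a per-pixel blend of the two. Real systems use learned, motion-aware interpolation; this linear version is only an illustration.

```python
def interpolate_frame(frame_a, frame_b, t):
    """Blend two colorized frames at position t in [0, 1].
    Each frame is a list of (r, g, b) pixel tuples of equal length."""
    return [
        tuple(round(a + (b - a) * t) for a, b in zip(pa, pb))
        for pa, pb in zip(frame_a, frame_b)
    ]
```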

Another approach is to use a technique called "temporal coherence." This technique involves ensuring that the colorization of one frame is consistent with the colorization of the surrounding frames. This helps to prevent color flickering and produces a more visually pleasing result.
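A minimal version of temporal coherence is an exponential moving average over per-pixel colors, which damps sudden frame-to-frame color jumps (flicker). The alpha value here is an arbitrary illustration, not a tuned parameter from any real colorization system.

```python
def smooth_colors(frames, alpha=0.7):
    """Exponentially average per-pixel colors across frames so that each
    frame's colorization stays consistent with the previous one."""
    smoothed = [frames[0]]
    for frame in frames[1:]:
        prev = smoothed[-1]
        smoothed.append([
            tuple(round(alpha * p + (1 - alpha) * c) for p, c in zip(pp, pc))
            for pp, pc in zip(prev, frame)
        ])
    return smoothed
```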

One of the benefits of using deep learning to colorize old videos is that it can be done automatically and in real-time. This makes it possible to colorize old movies and TV shows, bringing them to life in a way that was never before possible. It also allows us to see historical events and footage in a new light, with colors that were previously hidden.

However, there are also some limitations to this technology. One of the challenges is that the neural network may make errors in predicting the colors of certain objects or regions in the video. For example, if there are no reference images of a particular car model, the neural network may not be able to accurately predict its color. Another challenge is that the colorization process may introduce artifacts or noise into the video.

Despite these challenges, the technology for colorizing old videos using deep learning is rapidly advancing. It is now possible to produce colorized videos that are visually stunning and incredibly realistic. With further improvements to the technology, we may soon be able to see the world in a whole new way, with old videos and images transformed into vibrant and colorful representations of the past.

This is my first try,

Song: Kalyani Kalavani…
Movie: Anubhavangal Paalichakal

Here is the final output,

Movie: Shree 420 (1955)

Song: Mera Joota Hai Japani


Song: Saranamayyappa

Artist: K. J. Yesudas, Chorus
Album: Chembarathi

More videos will be available in the playlist below:
Remastered Videos

Sunday, February 20, 2022

Breaking Language Barriers: How Deep Learning Automates Language Conversion of Comics

My next project is about converting comics to my native language. Since manual translation is time-consuming, let's look at automating it. I took my first comic, Doctor Strange (English); here are the steps involved.

Comics are a form of visual storytelling that have gained immense popularity over the years. However, one of the challenges in reading comics is that they are often published in a single language, making them inaccessible to people who don't understand that language. But with the help of deep learning, it is now possible to automate the language conversion of comics, making them accessible to a wider audience.

The process of automating language conversion of comics involves training deep neural networks to recognize and translate text from one language to another. The neural network is trained on a large dataset of comics and their translations to learn the relationship between the text and the images. Once trained, the neural network can be used to automatically translate the text in new comics.

One of the challenges in automating language conversion of comics is that the text is often integrated with the images. This means that the neural network needs to be able to recognize and extract the text from the images. One approach to addressing this challenge is to use Optical Character Recognition (OCR) technology, which can recognize and extract text from images.

Another challenge is that different languages may have different sentence structures and word orders, making it difficult for the neural network to accurately translate the text. To address this, the neural network can be trained on a larger dataset of translations to improve its accuracy.

One of the benefits of automating language conversion of comics is that it can be done automatically and in real-time. This makes it possible for publishers to translate their comics into multiple languages without the need for manual translation. It also makes comics more accessible to people who may not have access to translations, such as those living in remote areas or those with visual impairments.

However, there are also some limitations to this technology. One of the challenges is that the neural network may make errors in translating certain words or phrases, especially those with multiple meanings. Another challenge is that the translated text may not always fit seamlessly with the images, which can be distracting for readers.

Despite these challenges, the technology for automating language conversion of comics using deep learning is rapidly advancing. It is now possible to produce translated comics that are visually stunning and accurate in their translations. With further improvements to the technology, we may soon be able to enjoy comics in multiple languages, bringing new audiences to this beloved form of storytelling.

In conclusion, automating language conversion of comics using deep learning is a promising technology that has the potential to revolutionize the comic industry. By making comics more accessible to a wider audience, we can foster a greater appreciation for this unique form of storytelling and bring people together across language barriers.

This is the page we are going to translate,

Step 1: Detection of text balloons: This involves passing the image through a deep learning model to identify all the speech balloons in it.
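As a toy stand-in for the balloon detector, here is a flood-fill pass over a binary mask (1 = balloon pixel) that returns one bounding box per connected white region. The real step uses a trained detection model, not thresholding, but the output shape is the same: a list of boxes.

```python
# Toy balloon "detector": connected-component bounding boxes on a binary
# mask. A trained model would produce these boxes in the real pipeline.

def find_balloons(mask):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, ys, xs = [(y, x)], [], []
                seen[y][x] = True
                while stack:  # flood fill one connected region
                    cy, cx = stack.pop()
                    ys.append(cy)
                    xs.append(cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```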

Step 2: OCR: We now have the locations of all the text boxes, so we take each one and pass it through an OCR engine. The quality of the OCR output depends on the quality of the source image; we can use Tesseract, ABBYY, Google, or any other OCR engine as needed.
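The OCR step is easy to keep engine-agnostic: crop each detected box and hand it to whichever engine is configured. In this sketch the engine is just a function argument, and the "image" is a character grid so the example stays self-contained; in practice `ocr_fn` would wrap Tesseract, ABBYY, or a cloud OCR API.

```python
def crop(image, box):
    """image: list of pixel rows; box: (x1, y1, x2, y2), inclusive."""
    x1, y1, x2, y2 = box
    return [row[x1:x2 + 1] for row in image[y1:y2 + 1]]

def ocr_balloons(image, boxes, ocr_fn):
    """Run the configured OCR engine on each detected balloon region."""
    return [(box, ocr_fn(crop(image, box))) for box in boxes]
```

Swapping engines then means swapping one function, without touching the detection or translation steps.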

So we will get something like this,

Step 3: Translation is the next step. We can use any of the language translators available.

At this point the translation may not be perfect, so we cannot fully depend on a generic translator; we need to tune it, or build our own translation model for each type of comic before translating. Different comics use their own kinds of dialogue, phrasing, and content delivery: the English in Amar Chitra Katha reads quite differently from that in Marvel comics. So for a professional result we cannot rely on any off-the-shelf conversational model, but automated translation works for the time being.
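A crude way to picture the per-comic tuning idea: a generic word-for-word dictionary that a comic-specific glossary can override. The Malayalam entries below are rough transliterations for illustration only; real translation goes through a trained model or a translation API, not a dictionary.

```python
# Generic word-level dictionary (illustrative transliterations only).
GENERIC = {"hello": "namaskaram", "thank": "nandi", "you": "ningal"}

def translate(text, glossary=None):
    """Translate word by word; a per-comic glossary overrides GENERIC,
    and unknown words pass through unchanged."""
    glossary = glossary or {}
    return " ".join(
        glossary.get(w, GENERIC.get(w, w)) for w in text.lower().split()
    )
```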

Step 4: Text masking: Now that we know where the text is, the next step is to mask it out. This uses AI models to detect the text inside the balloons we found and remove it. Alternatively, we could simply clear out the whole balloons, but that may not be perfect since balloons can be any shape.
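On a simple pixel grid, the clear-the-balloon fallback is just filling each detected box with white; the model-based version would erase only the glyphs and keep the balloon artwork intact.

```python
def mask_boxes(image, boxes, fill=255):
    """Return a copy of the grayscale image with every detected text box
    filled with `fill` (white by default). Boxes are (x1, y1, x2, y2)."""
    out = [list(row) for row in image]
    for x1, y1, x2, y2 in boxes:
        for y in range(y1, y2 + 1):
            for x in range(x1, x2 + 1):
                out[y][x] = fill
    return out
```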

Step 5: Replacing the original text: We can now paste the translated content over these bubbles.
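Sticking with a character-grid stand-in for the image, pasting the translated text back amounts to word-wrapping it into the balloon's bounding box; a real pipeline would render an actual font into the bitmap with an imaging library such as Pillow.

```python
def paste_text(image, box, text):
    """Word-wrap `text` into `box` on a character-grid image
    (a list of equal-length strings)."""
    out = [list(row) for row in image]
    x1, y1, x2, _ = box
    width = x2 - x1 + 1
    lines, line = [], ""
    for word in text.split():
        if line and len(line) + 1 + len(word) > width:
            lines.append(line)  # current line is full; wrap
            line = word
        else:
            line = f"{line} {word}".strip()
    lines.append(line)
    for dy, ln in enumerate(lines):
        for dx, ch in enumerate(ln[:width]):
            out[y1 + dy][x1 + dx] = ch
    return ["".join(row) for row in out]
```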

So yes, we have the comic ready.

Here I’m adding the converted comic (just a couple of pages).

Original: Dr. Strange from Marvel

Translated: Dr. Strange From Marvel – Malayalam Translated

Here is one more,

This I did in a hurry (ignore the imperfect mask removal), but it does the job.

Original: Arjun Unicrystal
Translated: Arjun – Unicrystal – Malayalam Translated

If you want to convert any comics, you can PM me. All we need is a person (a manual translator) who can verify the automatically translated content and make corrections.

Thanks for reading.
