    Webcast

    Classification of Audio Quality Using Machine Learning and Artificial Intelligence

    IEEE-BTS-Webcast

    Original Air Date: Wednesday, March 24, 2021, 1:00 PM
    Speaker - Tyler Morris, Tufts University Graduate School of Engineering (Electrical Engineering); B.S., Electrical and Computer Engineering, WPI


    Bio - Tyler Morris is a 22-year-old engineer from Boston, Massachusetts who is completing his Master's in Electrical Engineering at the Tufts Graduate School in Spring 2021 and received his Bachelor's Degree in Electrical and Computer Engineering from WPI. At Tufts, Morris has been working closely with Dean Dr. Panetta to combine the field of Machine Learning with his passion, Audio Engineering.

     Morris has designed audio effects for the likes of Joe Bonamassa, Conan O'Brien, Brad Whitford (Aerosmith), Elliot Easton (The Cars), Brian May (Queen), Warren Haynes, and others. Tyler Morris Designs (TMD) recently released its consumer line of pedals, with coverage in Premier Guitar, Guitar World, and Music Radar.
     
    Also an accomplished musician, Morris has performed with industry notables including Sammy Hagar, Steve Vai, Walter Trout, Christone "Kingfish" Ingram, Ronnie Earl, Yngwie Malmsteen, Leslie West, Robben Ford, Ronnie Montrose, and many others.
     
    Morris is endorsed by Gibson Guitars, Marshall Amps, Fishman, and many other companies, aiding them in the promotion and design of innovative new musical products.

    Lastly, Tyler Morris has released four internationally acclaimed albums; the most recent, "Living in The Shadows", debuted at #3 on the Billboard and iTunes Blues Charts. His passion for engineering, music, and audio has driven his success across numerous innovations.

    Co-Author - Dr. Karen Panetta, Dean of Graduate Education for the School of Engineering, Tufts University; Professor, Electrical & Computer Engineering; Adjunct Professor of Computer Science and Mechanical Engineering

    Abstract - The field of Audio Engineering has long relied on the finely tuned ears of experts to determine proper microphone placement and thereby maximize audio quality. As home studio applications become increasingly popular, the need for a consumer solution is more apparent than ever. Meanwhile, the fields of Machine Learning and Artificial Intelligence have in recent years tackled many diverse problems that were previously thought to be nearly impossible. With the recent worldwide pandemic increasing the demand for quality home audio applications, the authors have decided to investigate audio quality and determine whether it is possible for non-human ears to classify and quantify an audio recording's sound as "good" or "bad". While Machine Learning and Artificial Intelligence have been used before in some audio applications, they have never been applied in the manner the authors are proposing: quantifying audio recording quality. In this paper, the authors discuss similar research that has been conducted, describe their development of a unique approach to the problem, analyze experiments performed to extract features from collected data, compare and contrast multiple methods of classifying test data, and, finally, discuss future applications of this research that extend to many fields.
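    The announcement does not specify which features or classifiers the authors use. As a minimal sketch of the general pipeline the abstract describes (extract features from audio, then classify quality), the toy example below computes two common descriptors - RMS level and zero-crossing rate - and applies a hypothetical rule-based classifier; the function names and thresholds are illustrative assumptions, not the authors' method.

    ```python
    import math

    def extract_features(samples):
        """Compute two simple audio descriptors: RMS level and
        zero-crossing rate. (Illustrative only; the actual feature
        set used in the talk is not given in this announcement.)"""
        n = len(samples)
        rms = math.sqrt(sum(s * s for s in samples) / n)
        zcr = sum(
            1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
        ) / (n - 1)
        return rms, zcr

    def classify_quality(samples, rms_floor=0.05, zcr_ceiling=0.4):
        """Toy rule-based stand-in for a trained classifier: flag clips
        that are too quiet or dominated by noise-like content.
        Thresholds are hypothetical."""
        rms, zcr = extract_features(samples)
        return "good" if rms >= rms_floor and zcr <= zcr_ceiling else "bad"

    # Example: a clean 440 Hz tone sampled at 8 kHz classifies as "good"
    tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
    print(classify_quality(tone))
    ```

    In practice, the fixed thresholds would be replaced by a model trained on labeled "good"/"bad" recordings, which is the comparison of classification methods the abstract alludes to.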