Jim, this is your cue! From what you've described, we're all more involved in Computational Linguistics than we think: whenever a call is "recorded for training purposes", whenever you use voice activation, and more. Let's kick off with some basics. If you in Codetown have some definitions to add, now is the time.


Replies to This Discussion

Commercial applications of computational linguistics have been growing by leaps and bounds. IBM created a new poster child for the field with their Jeopardy champion Watson, almost everyone has used Google Translate and/or Voice by now, and fewer still will have escaped interacting with an Automated Voice Response system. Commercial segments making significant use of NLP include web marketing, medicine, biomedical research, finance, law, and customer call centers.

In the most general terms, computational linguistics is the application of computational methods to problems in linguistics. Linguistics, in turn, is the study of human language in all its aspects. Although it hasn't received a lot of press until recently, computational linguistics has been around pretty much since the development of the computer. One of the first uses of digital computers (and a key impetus for their development) was in code breaking, which is an application of "compling" (I know, it looks like a typo for "compiling", but CL looks like Common LISP to me). The Association for Computational Linguistics (ACL), the largest and oldest scientific and professional society in the field, will hold its 50th annual conference next July.

Since it is such a broad field, there are of course many specializations and various communities with differing objectives and vocabularies. Folks primarily focused on engineering computer systems that process human language at a level deeper than simply character strings generally fall under the Natural Language Processing (NLP) banner. The commercial success of CL applications and the focus on problems other than the traditional NLP ones (translation, text understanding, and speech recognition and generation) have spawned Text Analytics as another subfield, one closely associated with Business Intelligence (itself largely a commercial application of machine learning methods).

Perhaps the biggest focus of Text Analytics has been sentiment analysis, which assesses a speaker's attitude or mood in something they've said or written (we usually say "speaker" even when the medium is written or typed). Many businesses use sentiment analysis on the web to find out what folks are saying about them and their products; it is also used in call centers for quality control and in finance to predict future prices. Applications in law and government include "e-discovery" and smart OCR systems.

Lastly, and far from leastly, there is compling in the medical field, which, together with the specialized domain knowledge it calls for, is known as bioinformatics. Bioinformatics may well be compling's "killer app" because of the tremendous opportunity to do good. Answering technical questions for medical practitioners is the application IBM has targeted as Watson's "day job".
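To make sentiment analysis a bit more concrete, here is a minimal sketch in Python of the crudest possible approach: counting words against a tiny hand-made polarity lexicon. The word lists and the sentiment() function are my own toy illustration, not any particular product's method; real systems use far larger lexicons and statistical models.

    # Toy lexicon-based sentiment scorer (illustrative only).
    POSITIVE = {"good", "great", "love", "excellent", "happy"}
    NEGATIVE = {"bad", "terrible", "hate", "awful", "angry"}

    def sentiment(text):
        # Score = (count of positive words) - (count of negative words).
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(sentiment("I love this product and it is great"))  # -> positive
    print(sentiment("the service was terrible"))             # -> negative

Real systems also have to cope with negation ("not good"), sarcasm, and domain vocabulary, which is exactly where the machine learning comes in.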

This is a big topic and I have lots I would like to say, so I intend to drop in here often. Stay tuned!

~~~~ Jim White

 

On a similar note: this fall a colleague of mine informed me of a class being offered by two Stanford professors, Introduction to Artificial Intelligence (ai-class.com). I signed up for the course along with 138,000 other students. It has been interesting to see some of the theories and formulas used to help the computer find the correct answer. Prior to starting the class, I did not realize it is mostly related to statistics. I just finished the midterm and will hopefully learn a lot more as the course continues.
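As a tiny illustration of the sort of statistics the course leans on (my own made-up example, not course material), here is Bayes' rule applied to a toy spam-filtering question in Python:

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    # How likely is a message to be spam given that it contains "free"?
    # (All probabilities below are invented for illustration.)
    p_spam = 0.3                # prior: P(spam)
    p_free_given_spam = 0.6     # likelihood: P("free" | spam)
    p_free_given_ham = 0.05     # likelihood: P("free" | not spam)

    # Total probability of seeing "free" at all.
    p_free = p_free_given_spam * p_spam + p_free_given_ham * (1 - p_spam)

    p_spam_given_free = p_free_given_spam * p_spam / p_free
    print("P(spam | 'free') = %.2f" % p_spam_given_free)  # 0.84

Seeing one suggestive word moves the belief from 30% to about 84%, and that style of probabilistic updating is a big part of what makes so much of modern AI "mostly statistics".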

Next semester Stanford is offering a wider variety of online courses. One of them is specifically on natural language processing:

http://www.nlp-class.org/

There are also some other courses that would make good follow-ups to the AI course.

Machine learning: http://jan2012.ml-class.org/

Probabilistic graphical models: http://www.pgm-class.org/ (these are the kinds of graphs you saw in the AI course, not about pictures)

Game theory: http://www.game-theory-class.org/

 

Excellent. Thanks, Eric!

