There's a new book out called Programming F#. What's F#? Why should you care? And what's it going to do for you?

Well, here's what you get when you become multilingual: more work! Are you a freelancer? A little depressed by the state of the market these days? Does it blow? I can think of several similar words to describe the state of opt-in work. What do I mean by opt-in? That's the work in-house staff keep for themselves rather than sacrifice year-end bonuses, holiday parties, and perks to outside contractors. Given the choice, what would you do?

On the other hand, how's your Python? Check out the listings on the python.org jobs board. Rubyist? Do those make you feel better? Java developers have some options these days, too. And that's just a few languages, not to mention the .NET stack and a host of others.

Does that make you happier? What if you're not a freelancer and you're in-house staff? You might ask: what's learning a new language going to do for me? Well, different languages have unique features. Pythonistas say "Life's better without braces." Ever hear that? Wonder what they're talking about?

All this talk about functional languages is interesting, and now Microsoft has come out with F#. What's the big deal about functional programming? One way to find out is to see some code. Learning new tricks genuinely broadens your horizons, and if your day job feels dull, try spicing it up with a new language that interoperates with what you're already running.
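To give you a quick taste, here's a minimal sketch of my own (not an excerpt from the book) showing the kind of thing functional folks rave about: pattern matching, immutable data, and pipelines instead of loops.

// A first taste of F#: recursion with pattern matching, and pipelines instead of loops.
let rec factorial n =
    match n with
    | 0 -> 1
    | _ -> n * factorial (n - 1)

// Filter, transform, and sum a list without a single mutable variable.
let sumOfSquaresOfEvens numbers =
    numbers
    |> List.filter (fun x -> x % 2 = 0)
    |> List.map (fun x -> x * x)
    |> List.sum

printfn "%d" (factorial 5)                    // 120
printfn "%d" (sumOfSquaresOfEvens [1 .. 10])  // 220

No braces, no semicolons, no mutation: the data just flows through the pipeline. And because F# runs on the CLR, it can call into the .NET libraries you're already using.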

Comments?




