Codetown ::: a software developer's community
At Measures for Justice (MFJ) our mission is to use data to transform how we measure, understand, and reform the criminal justice system in America. We collect, clean, code, standardize, and analyze data from criminal justice agencies to provide consistent, comparable, objective, and public performance measures across the whole criminal justice system, from arrest to post-conviction, on a county-by-county basis (see our Data Portal at https://measuresforjustice.org/portal/).
In 2017, MFJ educated the Florida legislature about how data transparency in criminal justice could be improved in that state. As a result, the state enacted Florida Statutes 900.05, a law requiring court clerks, state attorneys, jail administrators, public defenders, and the Department of Corrections to report data to the Florida Department of Law Enforcement (FDLE) on a monthly basis. MFJ is supporting implementation of the new legislation through a pilot in the 6th Judicial Circuit (Pasco and Pinellas counties) that will embed at least one Data Fellow within each county's Clerk of Courts Office.
See more details here: https://measuresforjustice.org/about/jobs/data-fellow.html
Codetown is a social network. It's got blogs, forums, groups, personal pages and more! You might think of Codetown as a funky camper van with lots of compartments for your stuff and a great multimedia system, too! Best of all, Codetown has room for all of your friends.
Created by Michael Levin Dec 18, 2008 at 6:56pm. Last updated by Michael Levin May 4, 2018.
Check out the Codetown Jobs group.

In this series, we examine what happens after the proof of concept and how AI becomes part of the software delivery pipeline. As AI transitions from proof of concept to production, teams are discovering that the challenge extends beyond model performance to include architecture, process, and accountability. This transition is redefining what constitutes good software engineering.
By Arthur Casals
To prevent agents from obeying malicious instructions hidden in external data, all text entering an agent's context must be treated as untrusted, says Niv Rabin, principal software architect at AI-security firm CyberArk. His team developed an approach based on instruction detection and history-aware validation to protect against both malicious input data and context-history poisoning.
By Sergio De Simone
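To make the idea of instruction detection and history-aware validation concrete, here is a minimal Python sketch of the general pattern. This is an illustration only, not CyberArk's implementation: the regex patterns, function names, and the `<external_data>` delimiter are all assumptions, and a production system would use a trained classifier rather than heuristic regexes.

```python
import re

# Hypothetical heuristic patterns for instruction-like text; a real
# detector would be a trained classifier, but regexes show the idea.
INSTRUCTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]

def contains_instructions(text: str) -> bool:
    """Flag external text that appears to be addressing the agent."""
    return any(p.search(text) for p in INSTRUCTION_PATTERNS)

def sanitize_for_context(text: str) -> str:
    """Treat incoming text as untrusted: reject instruction-like
    content and wrap the rest in delimiters so downstream prompting
    can tell the model to treat it strictly as data."""
    if contains_instructions(text):
        raise ValueError("untrusted input contains instruction-like content")
    return f"<external_data>\n{text}\n</external_data>"

def validate_history(history: list[str]) -> bool:
    """History-aware check: re-scan earlier turns so an instruction
    smuggled in previously (context-history poisoning) is still caught
    before the context is reused."""
    return not any(contains_instructions(turn) for turn in history)
```

The key design point is that detection runs both on new input and on the accumulated history, so a payload that slipped through once cannot keep poisoning later turns.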
Anthropic has introduced Claude Cowork, an AI agent for file management on macOS. It automates document processing, organizes files, and executes multi-step workflows. Following recent issues, users are advised to keep backups of any files the agent touches and to verify data integrity before relying on it for office workflows.
By Andrew Hoblitzell
Meta has revealed how it scales its Privacy-Aware Infrastructure (PAI) to support generative AI development while enforcing privacy across complex data flows. Using large-scale lineage tracking, PrivacyLib instrumentation, and runtime policy controls, the system enables consistent privacy enforcement for AI workloads like Meta AI glasses without introducing manual bottlenecks.
By Leela Kumili
Researchers at MIT's CSAIL published a design for Recursive Language Models (RLM), a technique for improving LLM performance on long-context tasks. RLMs use a programming environment to recursively decompose and process inputs, and can handle prompts up to 100x longer than base LLMs.
By Anthony Alford
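The recursive decomposition behind RLMs can be sketched in a few lines. This is a simplified illustration, not the CSAIL design: the actual RLMs give the model a programming environment to inspect and split the prompt itself, whereas here `call_llm` is a stub and the fixed `CHUNK_LIMIT` and halving strategy are assumptions.

```python
# Simplified sketch of recursively decomposing a long input so each
# "model call" only ever sees text within its context budget.

CHUNK_LIMIT = 1000  # assumed max characters one model call handles

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; it truncates its input,
    pretending to 'summarize' it into a shorter result."""
    return prompt[: CHUNK_LIMIT // 2]

def recursive_process(text: str) -> str:
    """If the input fits the budget, process it directly; otherwise
    split it in half, recurse on each half, then process the combined
    intermediate results."""
    if len(text) <= CHUNK_LIMIT:
        return call_llm(text)
    mid = len(text) // 2
    left = recursive_process(text[:mid])
    right = recursive_process(text[mid:])
    return call_llm(left + "\n" + right)
```

Because each level of recursion shrinks its inputs before combining them, the total input to any single call stays bounded, which is how the approach handles prompts far longer than the base model's context.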
© 2026 Created by Michael Levin.