Developing Rich Internet Applications (RIAs) by combining Flex with TurboGears.

Fred Sells did this presentation for the OrlandoJUG. Here's the write-up, and I'll attach the source code to this discussion so we can work through it. Stay tuned for the code!

"Flex is a (mostly) open source IDE from Adobe that uses seamlessly combines XML layout definitions with ActionScript programming to create Flash applications. A fairly robust application can be built using the xml layout definitions with minimal ActionScript programming. Flex applications support a wide variety of server-side API’s, including XML and JSON.

TurboGears is an open source web framework written in Python, similar to Ruby on Rails. TurboGears supports all the major RDBMSs and uses either SQLAlchemy or SQLObject to provide object-relational mapping (ORM), simplifying server-side coding. A basic web application can be implemented in just two files: a database model and a controller containing the business logic. Although TurboGears is primarily used with any one of several HTML templating engines, it also supports JSON.

This presentation will focus on rapid development of RIAs using Flex on the client side with static XML files to simulate server-side responses, then migrating to JSON with a TurboGears backend."
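To make the "two files" idea from the abstract concrete, here is a minimal sketch of what a TurboGears 1.x-style model and controller serving JSON might look like. This is not Fred's demo code: the Resident table, the /residents endpoint, and the in-memory SQLite connection are invented for illustration, and the exact JSON-expose syntax differs a little between TurboGears versions.

# --- model.py: the database model (one of the two files) ---
# Illustrative only; table and connection URI are made up for this sketch.
from sqlobject import SQLObject, StringCol, connectionForURI, sqlhub

sqlhub.processConnection = connectionForURI("sqlite:/:memory:")

class Resident(SQLObject):
    # SQLObject's ORM maps this class to a database table.
    name = StringCol()
    room = StringCol()

Resident.createTable(ifNotExists=True)

# --- controllers.py: the business logic (the second file) ---
from turbogears import controllers, expose

class Root(controllers.RootController):
    # Exposing the method with the "json" template serializes the returned
    # dict as JSON (decorator details vary between TurboGears versions).
    @expose("json")
    def residents(self):
        return dict(residents=[dict(name=r.name, room=r.room)
                               for r in Resident.select()])

In the early, static-XML phase of the workflow described above, the Flex client would load a fixture file (for example via an HTTPService pointed at a static XML document); migrating to the live backend is then largely a matter of pointing that request at the JSON endpoint and adjusting the result parsing.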

BIO

Fred Sells is employed at Adventist Care Centers, where he develops web applications in Python and Java. He has been programming in Python since 1990 and Java since 2000. Prior to this, he was founder and President of Sunrise Software International, which developed ezX®, a GUI builder for the Unix environment. Fred has also consulted for the New York Stock Exchange and developed command and control software for the U.S. Navy. He is a graduate of Purdue University and is currently working on an MS in Computer Information Science at Boston University.
