This report on GWT, "The Future of GWT", will interest developers, architects, and managers alike. You'll learn details about GWT's usability and its competitors, and even get opinions on how it will stand up against Dart.

Over 1,300 respondents provided data. Overall, GWT is regarded highly by developers, mainly because it targets multiple browsers at once and reduces hand-coding of JavaScript. Slow compile times were a major complaint. These findings are obvious to anyone familiar with GWT, but useful to newcomers. The report digs much deeper, though, so experienced developers will also learn some things by seeing what a good-sized sample of survey respondents thinks.


Here's a preview of what you'll see in the report:

You'll have to provide your name and email address to get a copy, but I think that's fair, since the folks at Vaadin and the other big contributors worked hard to produce it. Thanks to Dave Booth for bringing this info to Codetown. If you have questions, Dave is your direct link to the group that put the report together. Check it out here.





InfoQ Reading List

Article: Beyond One-Click: Designing an Enterprise-Grade Observability Extension for Docker

Docker Extensions boost developer speed but create a "visibility gap" by isolating telemetry. To meet enterprise needs, extensions must act as bridges to centralized platforms. This article details how to use OpenTelemetry, policy-as-code, and encryption to build secure pipelines. Learn to balance developer productivity with the governance required for scalable, compliant observability.

By Pragya Keshap

Airbnb Migrates High-Volume Metrics Pipeline to OpenTelemetry

Airbnb's observability engineering team has published details of a large-scale migration away from StatsD and a proprietary Veneur-based aggregation pipeline toward a modern, open-source metrics stack built on OpenTelemetry Protocol (OTLP), the OpenTelemetry Collector, and VictoriaMetrics' vmagent. The resulting system now ingests over 100 million samples per second in production.

By Claudio Masolo

Google Releases Gemma 4 with a Focus on Local-First, On-Device AI Inference

With the release of Gemma 4, Google aims to enable local, agentic AI for Android development through a family of models designed to support the entire software lifecycle, from coding to production.

By Sergio De Simone

Lyft Scales Global Localization Using AI and Human-in-the-Loop Review

Lyft has implemented an AI-driven localization system to accelerate translations of its app and web content. Using a dual-path pipeline with large language models and human review, the system processes most content in minutes, improves international release speed, ensures brand consistency, and handles complex cases like regional idioms and legal messaging efficiently.

By Leela Kumili

Presentation: Reimagining Platform Engagement with Graph Neural Networks

Mariia Bulycheva discusses the transition from classic deep learning to GNNs for Zalando's landing page. She explains the complexities of converting user logs into heterogeneous graphs, the "message passing" training process, and the technical pitfalls of graph data leakage. She shares how a hybrid architecture solved inference latency, delivering contextual embeddings to a downstream model.

By Mariia Bulycheva

© 2026 Created by Michael Levin.