Wikipedia Place

We recently had a unique opportunity to speak with Brion Vibber at the October OrlandoJUG meeting. Let's discuss what we learned, both for those who couldn't attend and to expand on what we heard. We'll cover technical and socio-cultural topics.

Members: 9
Latest Activity: Oct 27, 2011

About Wikipedia Place


Wikipedia is arguably one of the most popular and highest-volume sites on the web. Brion Vibber has been its lead developer almost from the start. On Swampcast he described the initial architecture as just a couple of LAMP servers in Tampa. Since then the architecture has evolved, and many lessons have been learned.

There are also workflows, ancillary sites, the community, and many other facets of Wikipedia we can learn from and perhaps even influence in the future.

Let's talk about it here, where we can all ask questions and the discussions will have persistence.
(book photo from FromOldBooks)

Discussion Forum

Wikipedia Architecture - High Level View

The Wikipedia website began as a couple of LAMP servers. Brion described the wiki he chose and other details in the…

Tags: vibber, performance, LAMP, swampcast, architecture

Started by Michael Levin Nov 2, 2009.

Wikipedia Place Reading List

Notes

Welcome to Codetown!

Codetown is a social network. It's got blogs, forums, groups, personal pages and more! You might think of Codetown as a funky camper van with lots of compartments for your stuff and a great multimedia system, too! Best of all, Codetown has room for all of your friends.

When you create a profile for yourself you get a personal page automatically. That's where you can be creative and do your own thing. People who want to get to know you will click on your name or picture and…

Created by Michael Levin Dec 18, 2008 at 6:56pm. Last updated by Michael Levin May 4, 2018.

Looking for Jobs or Staff?

Check out the Codetown Jobs group.

 

Enjoy the site? Support Codetown with your donation.



InfoQ Reading List

Presentation: Chatting with Your Knowledge Graph

Jonathan Lowe explains how to connect an LLM directly to a structured graph database using a rapid prototype. He demonstrates how to use sentence embeddings and semantic search to allow natural language queries to retrieve and analyze structured data. This approach enables a local LLM to answer complex questions by leveraging the relationships within a knowledge graph.

By Jonathan Lowe
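
The technique described above, embedding verbalized graph facts and retrieving the closest matches for a question, can be sketched in a few lines. This is a minimal illustration, not the prototype from the talk: the triples, the model name, and the in-memory stand-in for the graph database are all assumptions.

    # Minimal sketch (not the prototype from the talk): semantic search over
    # knowledge-graph facts using sentence embeddings, with the retrieved facts
    # handed to a local LLM as prompt context. The graph is stubbed as a list of
    # (subject, relation, object) triples; a real setup would query a graph database.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    triples = [
        ("Ada Lovelace", "collaborated_with", "Charles Babbage"),
        ("Charles Babbage", "designed", "Analytical Engine"),
        ("Ada Lovelace", "wrote_notes_on", "Analytical Engine"),
    ]
    # Verbalize each triple so it can be embedded like ordinary text.
    facts = [f"{s} {r.replace('_', ' ')} {o}" for s, r, o in triples]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    fact_vecs = model.encode(facts, normalize_embeddings=True)

    def retrieve(question, k=2):
        """Return the k facts most semantically similar to the question."""
        q_vec = model.encode([question], normalize_embeddings=True)[0]
        scores = fact_vecs @ q_vec          # cosine similarity; vectors are normalized
        return [facts[i] for i in np.argsort(-scores)[:k]]

    question = "Who worked with Babbage on the Analytical Engine?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only these facts:\n{context}\n\nQuestion: {question}"
    print(prompt)  # this prompt would go to a local LLM (e.g. via llama.cpp or Ollama)
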

Pinterest Unifies Engineering Tools with New PinConsole Platform

Pinterest has introduced PinConsole, a unified internal developer platform (IDP) that centralizes engineering workflows. Built to address fragmented tools for deployment, monitoring, and service management, PinConsole provides a consistent layer that lets engineers focus on business logic instead of infrastructure complexity.

By Leela Kumili

How LinkedIn Built Enterprise Multi-Agent AI on Existing Messaging Infrastructure

LinkedIn extended its generative AI application platform to support multi-agent systems by repurposing its existing messaging infrastructure as an orchestration layer. This allowed the company to scale AI agents without building new coordination technology from scratch and achieve global availability while supporting complex multi-step workflows through agent coordination.

By Eran Stiller
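
The underlying pattern, agents handing work to one another over an existing message bus rather than a purpose-built orchestrator, can be sketched roughly as follows. This is not LinkedIn's code: the in-memory queues stand in for real messaging infrastructure, and the agent names are made up.

    # Minimal sketch of the general pattern (not LinkedIn's implementation):
    # agents coordinate by passing messages over an existing bus rather than a
    # bespoke orchestrator. The "bus" here is an in-memory dict of queues keyed
    # by topic; in production it would be the company's messaging infrastructure.
    import queue

    bus = {"research": queue.Queue(), "summarize": queue.Queue(), "done": queue.Queue()}

    def research_agent(task):
        # Pretend to gather data, then hand off to the next agent via the bus.
        task["findings"] = f"notes about {task['topic']}"
        bus["summarize"].put(task)

    def summarize_agent(task):
        task["summary"] = task["findings"].upper()
        bus["done"].put(task)

    handlers = {"research": research_agent, "summarize": summarize_agent}

    # A multi-step workflow is just a message dropped onto the first topic.
    bus["research"].put({"topic": "multi-agent orchestration"})

    # Drain the bus, dispatching each message to the agent that owns its topic.
    while any(not q.empty() for name, q in bus.items() if name != "done"):
        for name, handler in handlers.items():
            while not bus[name].empty():
                handler(bus[name].get())

    print(bus["done"].get()["summary"])
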

Podcast: Scaling Systems, Companies, and Careers with Suhail Patel

In this episode, Suhail Patel joins Thomas Betts for a discussion about growing yourself as your company grows. When he started at Monzo, Patel was one of four engineers on the then-new platform team; there are now over 100 people. The conversation covers how to thrive when the company and the systems you're building are going through major growth.

By Suhail Patel

Hugging Face Releases FinePDFs: A 3-Trillion-Token Dataset Built from PDFs

Hugging Face has unveiled FinePDFs, the largest publicly available corpus built entirely from PDFs. The dataset spans 475 million documents in 1,733 languages, totaling roughly 3 trillion tokens. At 3.65 terabytes in size, FinePDFs introduces a new dimension to open training datasets by tapping into a resource long considered too complex and expensive to process.

By Robert Krzaczyński
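
A corpus this large is typically consumed by streaming rather than downloading it whole. A rough sketch with the Hugging Face datasets library might look like the following; the repository id, subset name, and column name are assumptions, so check the dataset card for the exact identifiers.

    # Minimal sketch of streaming FinePDFs with the Hugging Face datasets library
    # instead of downloading all 3.65 TB. The repo id, subset name, and "text"
    # column are assumptions; the dataset card has the exact identifiers.
    from datasets import load_dataset

    ds = load_dataset(
        "HuggingFaceFW/finepdfs",   # assumed repository id
        name="eng_Latn",            # assumed per-language subset
        split="train",
        streaming=True,             # iterate lazily; nothing is materialized on disk
    )

    # Inspect the first few documents from the stream.
    for i, doc in enumerate(ds):
        print(doc.get("text", "")[:200])   # "text" column assumed
        if i == 2:
            break
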

© 2025   Created by Michael Levin.
