Codetown ::: a software developer's community
I don't clearly understand the difference between these two concepts. Someone told me that the essential difference is that cloud computing gives you a large amount of storage, while the grid offers more than storage: with it, you can tap into a lot of computing power.
Does anyone understand these two concepts more clearly and can explain?
I don't claim to be an expert, but the difference is (I think) in use.
Grid represents a scalable computing framework. You write your algorithm and your code and use as much computing power as your wallet can afford. (Useful, since some workloads are highly parallelizable.)
Cloud computing offers storage (true), but it also covers applications. Ideally, with cloud computing you don't need certain applications on your desktop; as long as you can reach the cloud, you can get, update, and use your data.
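To make the "highly parallelizable" point concrete, here's a toy sketch in Python. A grid shines on jobs you can split into independent chunks and farm out to many machines; here, `multiprocessing` worker processes on one box stand in for the grid's nodes (this is just an illustration, not a real grid scheduler, and the `crunch` function is a made-up placeholder for real work):

```python
from multiprocessing import Pool

def crunch(chunk):
    # Stand-in for an expensive, independent unit of work
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the job into 8 independent chunks, one per "node"
    chunks = [data[i::8] for i in range(8)]
    with Pool(processes=8) as pool:
        # Farm the chunks out in parallel, then combine the partial results
        partials = pool.map(crunch, chunks)
    print(sum(partials))
```

Because each chunk is processed with no communication between workers, adding more machines (or processes) speeds the job up almost linearly, and that's exactly the kind of workload people mean when they talk about grid computing.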
Thanks, Thomas.
What I got:
Grid - lots of computing power, and work that can be highly parallelized
Cloud - storage, and you don't need certain applications on your desktop (so is that just like a server application?)
Can someone tell us more?
I think if you look at the history, you will understand some of the difference.
In my own experience, the grid began with Oracle using it as a kind of meta-database, which would point to multiple databases residing on different but uniform hardware systems. So if a company had multiple Unix boxes and needed to increase the size of its database, instead of purchasing additional hardware it could implement the grid database and combine its multiple Unix servers into one database resource.
Cloud is much broader: it offers not only a database but an entire server, including the operating system.
The cloud exposes an operating system, whereas a grid exposes a database.
But I am no buzzword expert, so I might be wrong.
I just talked to a buddy about this. Essentially, the Oracle Grid product is different because it runs the DB in memory, so access times are a lot quicker. I don't think it's really a matter of one vs. the other so much as grid computing being a way to handle DB transactions faster.
He said their grid servers had something like 72 GB of RAM. Freaking crazy.
Bradley, what do you think about Jackie's reaction?
Created by Michael Levin Dec 18, 2008 at 6:56pm. Last updated by Michael Levin May 4, 2018.
