I don't clearly understand the difference between these two concepts. Someone told me that the essential difference is that cloud computing gives you a large amount of storage, while the grid offers more advantages than storage: you can get a lot of computing power out of it.

 

Does anyone understand these two concepts more clearly and can explain them to us?

Views: 213

Replies to This Discussion

I don't claim to be the expert, but the difference is (I think) in use.

 

Grid represents a scalable framework. You write your algorithm and your code and use as much computing power as your wallet can afford. (Useful, as some work can be highly parallelizable.)
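To make the "highly parallelizable" point concrete, here's a minimal sketch of that kind of workload, using only the Python standard library on a single machine. The `simulate` function is a hypothetical stand-in for one independent unit of work; on a real grid, each task would be farmed out to a separate machine instead of a separate local process.

```python
# Sketch of "embarrassingly parallel" work: many independent tasks
# that can run at the same time. A grid scales this pattern across
# machines; here we only scale across the cores of one machine.
from multiprocessing import Pool

def simulate(seed):
    """Stand-in for one independent unit of work (e.g. one Monte Carlo run)."""
    x = seed
    for _ in range(1000):
        x = (x * 1103515245 + 12345) % (2 ** 31)  # simple LCG step
    return x % 100

if __name__ == "__main__":
    with Pool() as pool:
        # Each task depends only on its own input, so the tasks can run
        # on separate workers with no coordination between them.
        results = pool.map(simulate, range(8))
    print(sum(results))
```

Because the tasks share no state, the parallel result is identical to running them one after another; that independence is what lets you throw more hardware at the problem.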

 

Cloud computing offers storage (true), but it also covers applications. Ideally, with cloud computing you don't need to have certain applications on your desktop: as long as you can reach the cloud, you can get, update, and use your data.

Thanks, Thomas.

What I got:

 

Grid - lots of computing power, and work can be highly parallelized

 

Cloud - storage, and you don't need to have certain applications on your desktop (isn't that just like a server application?)

 

Can someone tell us more?

I think if you look at the history, you will understand some of the difference.

In my own experience, the grid began with Oracle using it as a type of meta-database that would point to multiple databases residing on different but uniform hardware systems. So if a company had multiple Unix boxes and needed to increase the size of their database, instead of purchasing additional hardware they could implement the grid database and combine their multiple Unix servers into one database resource.

 

Cloud is much more than that: it offers not only a database, but an entire server, including the operating system.

The cloud exposes an operating system, whereas a grid exposes a database.

 

But I am no buzzword expert, so I might be wrong.

I just talked to a buddy about this. Essentially, the Oracle Grid product is different because it runs the DB in memory, so access times are a lot quicker. I don't think it is really a matter of "versus" so much as grid computing being a way to handle DB transactions faster.

 

He said their grid servers had something like 72 GB of RAM. Freaking crazy.

Bradley, what do you think about Jackie's reply?
