Computing Cluster | 運算叢集

  • What is it?
    A computing cluster is a set of computers that work together as if they were a single system, offering better performance, availability, and cost efficiency than a comparable mainframe computer. It differs from grid computing in that the nodes of a grid are assigned different, independent tasks, whereas the nodes of a computing cluster pool their combined computational power to work toward the same goal.

    Computers in the cluster, referred to as "nodes", may be consumer-grade products such as ordinary PCs, or highly customized enterprise machines. A cluster's performance is largely determined by how many nodes it contains; each node, in turn, is characterized by the number of processors it houses, as well as how many cores and threads each processor provides.

    Generally, there is a single "control" or "head" node, through which the user interfaces with the cluster. The remaining nodes are "compute nodes". All nodes are interconnected via high-speed networks. Cluster management software is used to schedule workloads and provide coordination.
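
    To make the head/compute-node relationship more concrete, the sketch below shows how one job can be spread across many nodes and gathered back into a single result. It is only an illustration under assumed tooling, not something described in this article: it presumes an MPI runtime and the Python package mpi4py are installed on every node, and that the job is launched from the head node with a command such as "mpirun -n 64 python sum_squares.py" or handed to a scheduler.

      # Minimal sketch: many processes spread across the cluster cooperate on one task.
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()   # this process's ID within the cluster-wide job
      size = comm.Get_size()   # total number of processes across all nodes

      # Each process works on its own disjoint slice of the overall problem...
      N = 1_000_000
      local_sum = sum(i * i for i in range(rank, N, size))

      # ...and the partial results are combined into one answer on rank 0.
      total = comm.reduce(local_sum, op=MPI.SUM, root=0)
      if rank == 0:
          print(f"Sum of squares below {N}: {total}")

    In practice the user rarely runs such a command by hand; the cluster management software mentioned above decides which compute nodes the processes land on, which is exactly the scheduling and coordination role described in this section.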

  • Why do you need it?
    Establishing a computing cluster can be a major step toward achieving high performance computing (HPC), high availability (HA), or load balancing capabilities. The advantages are many: faster processing, larger storage capacity, better data security, easier scalability, and greater cost efficiency. Access to such a solution may improve productivity in both the public and private sectors.

    Public sector: Government agencies around the world have set up dedicated computing clusters to upgrade their services, especially with regard to public safety and welfare. Examples include simulating natural disasters to improve forecast accuracy and relief efforts, analyzing genomic data to combat a dangerous pandemic, and even something as mundane as monitoring highway traffic so everyone can get home in time for the holidays.

    Private sector: Computing clusters have opened up boundless opportunities for businesses. Some examples include: more efficient exploration in the energy sector; greater security and profitability in the finance industry; more creative special effects and performances in showbiz. The ease with which a computing cluster can be scaled up and scaled out means a smaller company can set up a modest cluster early on, and then gradually expand the cluster as business grows.

  • How is GIGABYTE helpful?
    GIGABYTE's array of server solutions can be utilized as head or compute nodes within any computing cluster, depending on the user scenario. The H-Series High Density Servers and G-Series GPU Servers are well-suited for the head node, due to their industry-leading ultra-high density design, impressive storage capacity, and the outstanding performance of their Intel® Xeon® or AMD EPYC™ processors. The W-Series Tower Servers are excellent choices for the compute nodes, since they are housed inside stand-alone chassis that clients can customize or expand as their needs change. Rack Servers are also suitable for business-critical workloads; they come in a wide range of configurations, from 1U to 5U. The servers can be linked together via interconnects such as Ethernet, InfiniBand, or Omni-Path.

  • WE RECOMMEND
    RELATED ARTICLES
    IFISC in Spain Uses GIGABYTE Servers to Seek Solutions for COVID-19 and Climate Change
    The Institute for Cross-Disciplinary Physics and Complex Systems (IFISC) in Spain uses GIGABYTE's advanced server products to study major issues that affect all of humanity, including climate change, environmental pollution, and the COVID-19 pandemic. The computing problems it faces are complex and varied, and GIGABYTE servers are up to the task: the three server models used by the institute are well suited for high performance computing, numerical simulation, AI development, and big data management and analysis.
    Backed by GIGABYTE Servers, the University of Barcelona Gets a Head Start with a New Computing Cluster
    The Institute of Theoretical and Computational Chemistry at the University of Barcelona in Spain used GIGABYTE servers to set up a new computing cluster, raising the computing capacity of the university's data center by more than forty percent. Hundreds of researchers benefit from the powerful performance of AMD EPYC™ processors, while administrators can remotely control multiple servers and easily manage the new cluster through GIGABYTE Server Management (GSM), software that GIGABYTE provides free of charge.
    Success Story: NTNU Builds a Cloud Computing Platform to Open Up New Possibilities for Scientific Research
    High performance computing (HPC) plays a key role in modern scientific research. Recognizing the importance and forward-looking potential of HPC, the College of Science at National Taiwan Normal University invested in GIGABYTE servers to set up a cloud computing platform, in the hope of accelerating research output and training more professionals.
    Decoding the Storm: GIGABYTE Computing Cluster Helps Waseda University Study Climate Change
    Waseda University in Japan is renowned as a global center for disaster prevention research. It assembled a computing cluster from GIGABYTE's GPU coprocessing servers and workstations to study and mitigate natural disasters such as tsunamis and storm surges. The scientific team is committed to understanding the effects of climate change, as the tropical cyclones of tomorrow may well become more and more severe.