Artificial Intelligence

  • What is it?
    Artificial Intelligence (AI) is a broad branch of computer science. The goal of AI is to create machines that can function intelligently and independently, and that can work and react the way humans do. To build these abilities, machines and the software and applications that power them need to derive their intelligence much as humans do: by retaining information and becoming smarter over time.

    AI is not a new concept; the idea has been discussed since the 1950s. However, it has only recently become technically feasible to develop and deploy in the real world, thanks to advances in technology: we can now collect and store the huge amounts of data required for machine learning, while rapid increases in processing speed and computing capability make it possible to process this data to train a machine or application and make it "smarter".

  • Why do you need it?
    Although we tend to associate AI with the image of a self-aware robot that can move, act, and think just like a human being (courtesy of countless science fiction films), you may already be using AI more than you realize. For example, YouTube and Netflix rely on AI to make video recommendations, classify content, and censor inappropriate material, while speech recognition and language translation platforms such as Amazon Alexa and Google Translate use AI to better understand real-world speech and perform translation. As users interact with these applications, they become smarter by remembering user behavior and reactions. AI will also be a key enabler of many technologies on the verge of mainstream deployment, such as autonomous driving or flying drones used for package delivery. For these applications, AI matters because of its ability to make decisions independently in real time based on real-world data, and to learn from this data and from user and environmental feedback to become more accurate over time.
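
    To make the idea of "becoming smarter from user behavior" concrete, below is a minimal sketch in Python of a recommender that improves as watch history accumulates. The watch-history matrix, video names, and scoring rule are all invented for illustration; real services like YouTube or Netflix use far more sophisticated models.

    # A minimal sketch of how a recommender can "learn" from user behavior.
    # All data here is made up for illustration.
    import numpy as np

    # Rows = users, columns = videos; 1 means the user watched that video.
    watch_history = np.array([
        [1, 1, 0, 0],   # user 0 watched videos A and B
        [1, 1, 1, 0],   # user 1 watched videos A, B and C
        [0, 0, 1, 1],   # user 2 watched videos C and D
    ])
    video_names = ["A", "B", "C", "D"]

    def recommend(user_row: np.ndarray, history: np.ndarray) -> str:
        # Score each video by how often users with similar tastes watched it.
        similarity = history @ user_row   # overlap with every existing user
        scores = similarity @ history     # similarity-weighted watch counts
        scores[user_row == 1] = -1        # never re-recommend a seen video
        return video_names[int(np.argmax(scores))]

    # A new user who watched only video A is most similar to users 0 and 1,
    # so video B (watched by both of them) is recommended.
    print(recommend(np.array([1, 0, 0, 0]), watch_history))

    Every new row added to the matrix refines future scores, which is the sense in which such an application "remembers" behavior and improves over time.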

  • How is GIGABYTE helpful?
    Currently one of the most widely adopted methods of developing artificial intelligence in machines and applications is machine learning, and in particular its advanced variant Deep Learning, which uses Deep Neural Network (DNN) models: layered algorithms loosely modeled on the structure and function of the human brain. Deep Learning requires not only a large amount of data (which can be stored and processed with GIGABYTE's Storage Servers and / or High Density Servers), but also massive parallel computing power to train an algorithm on this data. GIGABYTE's GPU Servers (such as the G481-S80 or G291-280) are ideal for this task.
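
    As a rough illustration of what "training" a neural network involves, here is a toy sketch in Python using only NumPy: show the network data, measure its error, and nudge the weights to reduce that error. The architecture, dataset, and learning rate are all invented for illustration; production DNN training runs on GPU-accelerated frameworks (such as TensorFlow or PyTorch) over vastly larger datasets, which is where the parallel computing power of GPU servers comes in.

    # A toy DNN training loop. All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny dataset: XOR, a classic problem no single linear model can solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of 8 units, weights initialized randomly.
    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5  # learning rate
    for _ in range(5000):
        # Forward pass: compute the network's current predictions.
        hidden = sigmoid(X @ W1 + b1)
        pred = sigmoid(hidden @ W2 + b2)

        # Backward pass: gradients of the squared error for every parameter.
        delta2 = (pred - y) * pred * (1 - pred)           # output-layer error
        delta1 = (delta2 @ W2.T) * hidden * (1 - hidden)  # hidden-layer error

        # Gradient descent: the "learning" step that improves accuracy.
        W2 -= lr * hidden.T @ delta2
        b2 -= lr * delta2.sum(axis=0)
        W1 -= lr * X.T @ delta1
        b1 -= lr * delta1.sum(axis=0)

    print(pred.round(2))  # should approach [[0], [1], [1], [0]] as training proceeds

    Each pass over the data is cheap here, but real DNNs repeat this loop over millions of parameters and examples, which is why training is dominated by parallel matrix arithmetic and maps so well to GPUs.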

    GIGABYTE has also developed a DNN Training Appliance, a fully integrated software and hardware stack built on our G481-HA1 server for hassle-free machine learning environment setup, management, and monitoring. It includes hardware and software optimizations that reduce the time required for DNN training jobs and improve their accuracy.

  • WE RECOMMEND
    RELATED ARTICLES
    Spain's IFISC Uses GIGABYTE Servers to Seek Solutions for COVID-19 and Climate Change
    The Institute for Cross-Disciplinary Physics and Complex Systems (IFISC) in Spain uses GIGABYTE's advanced server products to study major issues affecting all of humanity, including climate change, environmental pollution, and the COVID-19 pandemic. The computing problems it faces are complex and diverse, and GIGABYTE servers deliver, because the three server models the institute uses are well suited to high performance computing, numerical simulation, AI development, and big data management and analysis.
    Lowell Observatory in the US Uses GIGABYTE Servers to Search for Alien Life Beyond the Solar System
    Lowell Observatory in Arizona, USA, uses GIGABYTE's G482-Z50 GPU coprocessing server to analyze the "space noise" emitted by stars, helping scientists find habitable planets outside the Solar System more quickly. Powerful AMD EPYC™ processors, top-tier parallel computing capability, excellent scalability, and industry-leading stability make the server equal to this monumental task. Scientists are optimistic that a "second Earth" will be found in outer space within our lifetimes.
    HDMI 2.1: Stunning 4K/120Hz Video Output That Fully Unleashes RTX 30 Performance
    At CES in early 2021, GIGABYTE and graphics giant Nvidia unveiled the latest laptops equipped with RTX 30 series GPUs. Not only does their 3D computing performance reach a new level, but a comprehensive set of ports also makes them stand out among competing products; in particular, the adoption of the latest HDMI 2.1 audio/video interface has hardware enthusiasts flocking to them. This article takes an in-depth look at HDMI 2.1.