<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.openwfm.org/index.php?action=history&amp;feed=atom&amp;title=Gross_cluster</id>
	<title>Gross cluster - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.openwfm.org/index.php?action=history&amp;feed=atom&amp;title=Gross_cluster"/>
	<link rel="alternate" type="text/html" href="https://wiki.openwfm.org/index.php?title=Gross_cluster&amp;action=history"/>
	<updated>2026-04-30T02:18:38Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.41.5</generator>
	<entry>
		<id>https://wiki.openwfm.org/index.php?title=Gross_cluster&amp;diff=4448&amp;oldid=prev</id>
		<title>Jmandel: Created page with &quot;{{legacy}} : &#039;&#039;See Gross documentation for usage instructions.&#039;&#039;  The compute nodes of this cluster have a gross (dozen dozen, 12*12=144) wiki...&quot;</title>
		<link rel="alternate" type="text/html" href="https://wiki.openwfm.org/index.php?title=Gross_cluster&amp;diff=4448&amp;oldid=prev"/>
		<updated>2021-10-16T16:03:11Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;{{legacy}} : &amp;#039;&amp;#039;See &lt;a href=&quot;/index.php?title=Gross_documentation&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;Gross documentation (page does not exist)&quot;&gt;Gross documentation&lt;/a&gt; for usage instructions.&amp;#039;&amp;#039;  The compute nodes of this cluster have a &lt;a href=&quot;http://en.wikipedia.org/wiki/Gross_(unit)&quot; class=&quot;extiw&quot; title=&quot;wikipedia:Gross (unit)&quot;&gt;gross&lt;/a&gt; (dozen dozen, 12*12=144) wiki...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{legacy}}&lt;br /&gt;
: &amp;#039;&amp;#039;See [[Gross documentation]] for usage instructions.&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The compute nodes of this cluster have a [[wikipedia:Gross (unit)|gross]] (a dozen dozen, 12*12=144) of [[wikipedia:Multi-core processor|cores]]. The cluster was built by [http://aeoncomputing.com Aeon Computing] in Spring 2010.&lt;br /&gt;
&lt;br /&gt;
==Purpose==&lt;br /&gt;
&lt;br /&gt;
The primary purpose of the compute nodes is to run wildfire simulations for the [http://openwfm.org NSF CDI wildfires project], in particular as a back-end for web-initiated computations. The remaining capacity is available for academic research (including externally funded research) and educational uses only. The cluster was funded by NSF grant [http://www.nsf.gov/awardsearch/showAward?AWD_ID=0835579 0835579], Principal Investigator [[Jan Mandel]], with contributions from the Department of Mathematical and Statistical Sciences and the Center for Computational Mathematics.&lt;br /&gt;
&lt;br /&gt;
==Configuration==&lt;br /&gt;
&lt;br /&gt;
* 12 compute nodes with 2 [http://ark.intel.com/Product.aspx?id=47920 Intel X5670] [[wikipedia:Nehalem (microarchitecture)|Westmere]] CPUs of 6 cores each; thus, each node is a 12-core [[wikipedia:Symmetric multiprocessing|SMP]]. Each compute node has 24GB of memory.&lt;br /&gt;
* Front end with 2 Intel X5670 CPUs (12 cores total), 144GB of memory, and an [http://www.nvidia.com/object/product_tesla_s1070_us.html NVIDIA Tesla S1070] supercomputing system for high-end virtual graphics rendering and GPU computing&lt;br /&gt;
* Storage server with twenty 2TB disks, configured as a [[wikipedia:Nested RAID levels#RAID 10 (RAID 1+0)|RAID 1+0]] array for 20TB effective capacity (see the check below).&lt;br /&gt;
* QDR [[wikipedia:InfiniBand|InfiniBand]], connecting the above components at 40 Gbit/s.&lt;br /&gt;
* Offsite 20TB backup storage server.&lt;br /&gt;
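&lt;br /&gt;
The core and storage totals above can each be checked with a line of shell arithmetic (an illustrative sketch only; nothing here runs on the cluster):&lt;br /&gt;
 echo &amp;#039;12 * 2 * 6&amp;#039; | bc    # 144 cores across the 12 compute nodes&lt;br /&gt;
 echo &amp;#039;20 * 2 / 2&amp;#039; | bc    # 20TB effective: RAID 1+0 mirroring halves the 40TB raw&lt;br /&gt;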
&lt;br /&gt;
==Documentation==&lt;br /&gt;
* [[Gross user&amp;#039;s documentation]]&lt;br /&gt;
* [[Gross administrator&amp;#039;s documentation]]&lt;br /&gt;
&lt;br /&gt;
==Access==&lt;br /&gt;
&lt;br /&gt;
* Use of all CCM computing equipment including the Gross cluster is subject to U.S. Government [[Export controls]].&lt;br /&gt;
* Any math user can get a Gross cluster account on request. If the cluster becomes so overloaded that the primary purpose (wildfire simulation) cannot be met, we&amp;#039;ll deal with that when it happens.&lt;br /&gt;
* Every Gross cluster user must belong to at least one project for which use of the cluster is requested. Typically, user accounts are requested by a permanent faculty member who acts as the project leader.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Every project leader must maintain an up-to-date [[Projects on the Gross cluster|wiki page for every project]] on this cluster as a condition of granting and continuing access.&amp;#039;&amp;#039;&amp;#039; The project page needs to include a list of users, funding sources, a list of publications resulting from the project with full-text links, and a summary of major results. Use of images is encouraged. This information is important for reporting to funding agencies and to UCD, as well as for documenting compliance with the [[export controls]].&lt;br /&gt;
* To access the cluster from an [[wikipedia:ssh|ssh]] command line, ssh to math.ucdenver.edu, then run &amp;#039;&amp;#039;&amp;#039;ssh gross&amp;#039;&amp;#039;&amp;#039; from there; see the example below.&lt;br /&gt;
* See [[Gross documentation]] for information on using the cluster.&lt;br /&gt;
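&lt;br /&gt;
For example, from a terminal (a sketch of the two-hop login described above; replace &amp;#039;&amp;#039;username&amp;#039;&amp;#039; with your own math.ucdenver.edu login):&lt;br /&gt;
 ssh username@math.ucdenver.edu   # first hop: the math department server&lt;br /&gt;
 ssh gross                        # second hop: on to the cluster front end&lt;br /&gt;
The same can be done in one command, &amp;#039;&amp;#039;&amp;#039;ssh -t username@math.ucdenver.edu ssh gross&amp;#039;&amp;#039;&amp;#039;, where -t keeps an interactive terminal through the hop.&lt;br /&gt;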
&lt;br /&gt;
==Status==&lt;br /&gt;
&lt;br /&gt;
The cluster is available on request to users with existing shell accounts on &amp;#039;&amp;#039;&amp;#039;math.ucdenver.edu&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
: &amp;#039;&amp;#039;See [[Gross cluster performance]] and [[Gross cluster HPL benchmark]] for performance data.&amp;#039;&amp;#039;&lt;br /&gt;
* Maximum theoretical performance of one X5670 processor core is 2.93GHz * 4 DP operations per cycle = 11.72 Gflops/core. The compute nodes total 144*11.72 = 1687.7 Gflops; including the head node, 156*11.72 = 1828.3 Gflops (reproduced below).&lt;br /&gt;
* [http://www.netlib.org/benchmark/hpl/ HPL benchmark] with [http://math-atlas.sourceforge.net/ ATLAS] [http://www.netlib.org/blas/ BLAS]: 677 Gflops&lt;br /&gt;
* Sustained writes from a compute node to the storage server (100GB file) over InfiniBand: 557 MB/s&lt;br /&gt;
* MPI latency: 1.85 µs, and 4.8 µs for 1kB messages&lt;br /&gt;
* MPI node-to-node bandwidth: 3.4 GB/s, bi-directional 6.5 GB/s&lt;br /&gt;
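&lt;br /&gt;
The peak figures and the HPL efficiency can be reproduced from the numbers above (shell, using bc; illustrative only):&lt;br /&gt;
 echo &amp;#039;2.93 * 4&amp;#039; | bc               # 11.72 Gflops per X5670 core&lt;br /&gt;
 echo &amp;#039;144 * 11.72&amp;#039; | bc             # 1687.68 Gflops, compute nodes only&lt;br /&gt;
 echo &amp;#039;156 * 11.72&amp;#039; | bc             # 1828.32 Gflops, head node included&lt;br /&gt;
 echo &amp;#039;scale=2; 677 / 1687.68&amp;#039; | bc  # .40: HPL reaches about 40% of theoretical peak&lt;br /&gt;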
&lt;br /&gt;
==Funding==&lt;br /&gt;
&lt;br /&gt;
The acquisition was funded from [[User:Jmandel|Jan Mandel]]&amp;#039;s grants, with a smaller part from other sources:&lt;br /&gt;
* NSF grant [http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0835579 AGS 0835579]&lt;br /&gt;
* NSF grant [http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0713876 DMS 0713876]&lt;br /&gt;
* [http://math.ucdenver.edu/ Department of Mathematical &amp;amp; Statistical Sciences]&lt;br /&gt;
* [http://ccm.ucdenver.edu/ Center for Computational Mathematics]&lt;br /&gt;
* Some parts (UPS, racks) were reused from the earlier [http://ccm.ucdenver.edu/beowulf/ Beowulf] cluster built in 2001 and funded by the NSF grant [http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0079719 DMS 0079719]&lt;br /&gt;
* The NVIDIA Tesla S1070 system was donated by NVIDIA in 2009 for a [http://math.ucdenver.edu/~jmandel/classes/7924s09c GPU computing class]&lt;br /&gt;
&lt;br /&gt;
The cluster is operated by the Center for Computational Mathematics, which also covers the operating costs.&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
* See the list of [[projects on the Gross cluster]].&lt;br /&gt;
&lt;br /&gt;
==Real-time monitoring==&lt;br /&gt;
&lt;br /&gt;
* [http://ccm.ucdenver.edu/ganglia/gross/ Ganglia status page]&lt;br /&gt;
* [http://ccm.ucdenver.edu/watch_temp.log Temperature log]&lt;br /&gt;
* [http://ccm.ucdenver.edu/gross_status/nfsstat.html NFS status]&lt;br /&gt;
&lt;br /&gt;
[[Category:Gross cluster|Cluster]]&lt;br /&gt;
[[Category:Hardware]]&lt;br /&gt;
[[Category:Jan Mandel]]&lt;/div&gt;</summary>
		<author><name>Jmandel</name></author>
	</entry>
</feed>