Design007 Magazine



NOVEMBER 2020 | DESIGN007 MAGAZINE 65

…electromagnetic fields. They contain one, two, or even three rows of vias spaced closely enough together to form an electromagnetic wave barrier. Via fences can be used to shield microstrip and stripline transmission lines, or functional circuits, from each other. However, a via fence placed too close to the line being guarded can degrade the isolation of that line or circuit. Via fences can also be used around the periphery of a board to prevent electromagnetic interference with other equipment.

4. Stacked and Staggered Vias

Stacked and staggered microvias are formed with a laser (Nd:YAG or Nd:YLF) exactly as described earlier. Stacked vias are literally stacked upon each other by an additive process, while staggered vias are offset so that they do not reside directly over each other. The advantage of stacking vias is that it enables extremely dense board designs, such as via-in-pad structures within tight-pitch BGA footprints. For this, the vias are drilled with a laser and then plated, filled, and planarized to create the interconnect. The next layer is built by laminating another layer on top of the previous via. This can typically be done three to four times (or more, depending upon the fabricator). Then, the surface layer is planarized (made flat) so that the PCB is flat at assembly and no "part rocking" will occur.

Conclusion

In this column, I repeatedly oversimplified both the function of and the process for vias. Ultimately, consult your chosen fabricator for more detail on capabilities and process limitations. Thanks for reading! DESIGN007

Mark Thompson, CID+, is a senior PCB technologist at Monsoon Solutions Inc. To read past columns or contact Thompson, click here. Thompson is also the author of The Printed Circuit Designer's Guide to… Producing the Perfect Data Package, available for download along with other free, educational titles.

NVIDIA A100 Marks Dawn of Next Decade in Accelerated Cloud Computing

Amazon Web Services' first GPU instance debuted 10 years ago with the NVIDIA M2050. Since then, AWS has added to its stable of cloud GPU instances, including the K80 (p2), K520 (g2), M60 (g3), V100 (p3/p3dn), and T4 (g4). With its new P4d, available today, AWS is paving the way for another bold decade of accelerated computing powered by the latest NVIDIA A100 Tensor Core GPU.

The P4d delivers AWS's highest-performance, most cost-effective GPU-based platform for machine learning training and high-performance computing applications. P4d instances also provide exceptional inference performance. In addition, the P4d instance is supported in many AWS services, including Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, AWS ParallelCluster, and Amazon SageMaker. P4d can also leverage all of the optimized, containerized software available from NGC, including HPC applications, AI frameworks, pre-trained models, Helm charts, and inference software such as TensorRT and Triton Inference Server.

The first decade of GPU cloud computing has brought over 100 exaflops of AI compute to the market. With the arrival of the Amazon EC2 P4d instance powered by NVIDIA A100 GPUs, the next decade of GPU cloud computing is off to a great start. (Source: NVIDIA Newsroom)
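The via-fence guidance earlier in the column ("vias spaced close enough together to form an electromagnetic wave barrier") is often quantified with a rule of thumb: keep the via pitch below a fraction of the wavelength, commonly λ/20, at the highest frequency of concern. The column does not give a formula, so the function and values below are a hedged sketch of that common rule, not the author's prescription:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def max_fence_pitch(f_max_hz: float, eps_r: float, fraction: float = 20.0) -> float:
    """Rule-of-thumb upper bound on via-fence pitch: the wavelength in the
    dielectric at the highest frequency of concern, divided by `fraction`
    (lambda/20 is a commonly cited conservative choice)."""
    wavelength = C / (f_max_hz * math.sqrt(eps_r))
    return wavelength / fraction

# Illustrative example: shielding up to 10 GHz in FR-4 (eps_r ~ 4.0)
pitch_m = max_fence_pitch(10e9, 4.0)
print(f"max via pitch ~ {pitch_m * 1e3:.2f} mm")  # roughly 0.75 mm
```

Tighter pitch only helps; as the column notes, placing the fence too close to the guarded line can itself degrade isolation, so pitch is only one of the constraints.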
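The column's point that stacked vias enable via-in-pad structures within tight-pitch BGA footprints can be seen with a rough geometric sketch: a staggered chain grows laterally with each level, while a stacked chain does not. The pad diameter, stagger offset, and pitch values below are illustrative assumptions, not fabricator rules:

```python
def staggered_span(levels: int, pad_dia: float, offset: float) -> float:
    """Lateral footprint of a staggered microvia chain: each level is
    shifted by `offset` so vias never sit directly on top of each other."""
    return pad_dia + (levels - 1) * offset

def stacked_span(levels: int, pad_dia: float) -> float:
    """Stacked vias sit directly on top of one another, so the chain
    occupies a single pad's diameter regardless of depth."""
    return pad_dia

# Illustrative numbers: 3 microvia levels, 0.25 mm capture pads,
# 0.20 mm stagger offset, 0.40 mm BGA pad pitch (all in mm).
levels, pad, off, pitch = 3, 0.25, 0.20, 0.40
print(staggered_span(levels, pad, off))  # 0.65 mm -- wider than the pitch
print(stacked_span(levels, pad))         # 0.25 mm -- fits under one pad
```

With these numbers the staggered chain outgrows a 0.4 mm pitch by the third level, which is why dense designs pay for the extra plate/fill/planarize steps that stacking requires.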
