[1]
Ahmad Abdelfattah, Willow Ahrens, Hartwig Anzt, Chris Armstrong, Ben Brock, Aydin Buluc, Federico Busato, Terry Cojean, Tim Davis, Jim Demmel, Grace Dinh, David Gardener, Jan Fiala, Mark Gates, Azzam Haidar, Toshiyuki Imamura, Pedro Valero Lara, Jose Moreira, Sherry Li, Piotr Luszczek, Max Melichenko, Yvan Mokwinski, Riley Murray, Spencer Patty, Slaven Peles, Tobias Ribizel, Jason Riedy, Siva Rajamanickam, Piyush Sao, Manu Shantharam, Keita Teranishi, Stan Tomov, Yu-Hsiang Tsai, and Heiko Weichelt. Interface for sparse linear algebra operations, 2024. [ bib | arXiv | http ]
The standardization of an interface for dense linear algebra operations in the BLAS standard has enabled interoperability between different linear algebra libraries, thereby boosting the success of scientific computing, in particular in scientific HPC. Despite numerous efforts in the past, the community has not yet agreed on a standardization for sparse linear algebra operations, for several reasons. One is the fact that sparse linear algebra objects allow for many different storage formats, and different hardware may favor different storage formats. This makes the definition of a FORTRAN-style all-encompassing interface extremely challenging. Another reason is that, as opposed to dense linear algebra functionality, in sparse linear algebra the size of the sparse data structure holding the operation result is not always known prior to the computation. Furthermore, as opposed to the standardization effort for dense linear algebra, we are late in the technology readiness cycle, and many production-ready software libraries using sparse linear algebra routines have implemented and committed to their own sparse BLAS interface. At the same time, there exists a demand for standardization that would improve interoperability and sustainability, and allow for easier integration of building blocks. In an inclusive, cross-institutional effort involving numerous academic institutions, US National Labs, and industry, we spent two years designing a hardware-portable interface for basic sparse linear algebra functionality that serves user needs and is compatible with the different interfaces currently used by different vendors. In this paper, we present a C++ API for sparse linear algebra functionality, discuss the design choices, and detail how software developers retain substantial freedom in how they implement functionality behind this API.
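The storage-format problem the abstract mentions is easy to see in miniature. Below is a hedged, illustrative Python sketch (not code from the proposed C++ API) computing the same y = A·x from two common sparse formats; a standard interface must either enumerate such formats or abstract over them:

```python
# Hypothetical illustration: the same sparse matrix-vector product computed
# from two common storage formats. The names and shapes here are assumptions
# for demonstration, not part of the proposed standard.
import numpy as np

def spmv_coo(rows, cols, vals, x, m):
    """y = A*x with A in coordinate (COO) format."""
    y = np.zeros(m)
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]
    return y

def spmv_csr(indptr, indices, data, x):
    """y = A*x with A in compressed sparse row (CSR) format."""
    m = len(indptr) - 1
    y = np.zeros(m)
    for r in range(m):
        for k in range(indptr[r], indptr[r + 1]):
            y[r] += data[k] * x[indices[k]]
    return y

# The 2x2 matrix [[2, 0], [1, 3]] in both formats gives identical results.
x = np.array([1.0, 1.0])
print(spmv_coo([0, 1, 1], [0, 0, 1], [2.0, 1.0, 3.0], x, 2))  # [2. 4.]
print(spmv_csr([0, 1, 3], [0, 0, 1], [2.0, 1.0, 3.0], x))     # [2. 4.]
```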
[2]
Jeffrey Young, Patrick Lavin, Jason Riedy, and Srinivas Eswar. Exploring graph analysis for HPC with near-memory accelerators. In IEEE High Performance Extreme Computing (HPEC), September 2022. [ bib | http ]
[3]
James Demmel, Jack Dongarra, Mark Gates, Greg Henry, Julien Langou, Xiaoye Li, Piotr Luszczek, Weslley Pereira, Jason Riedy, and Cindy Rubio-González. Proposed consistent exception handling for the BLAS and LAPACK. CoRR, 2022. [ bib | arXiv | http ]
Numerical exceptions, which may be caused by overflow, operations like division by 0 or sqrt(-1), or convergence failures, are unavoidable in many cases, in particular when software is used on unforeseen and difficult inputs. As more aspects of society become automated, e.g., self-driving cars, health monitors, and cyber-physical systems more generally, it is becoming increasingly important to design software that is resilient to exceptions, and that responds to them in a consistent way. Consistency is needed to allow users to build higher-level software that is also resilient and consistent (and so on recursively). In this paper we explore the design space of consistent exception handling for the widely used BLAS and LAPACK linear algebra libraries, pointing out a variety of instances of inconsistent exception handling in the current versions, and propose a new design that balances consistency, complexity, ease of use, and performance. Some compromises are needed, because there are preexisting inconsistencies that are outside our control, including in or between existing vendor BLAS implementations, different programming languages, and even compilers for the same programming language. And user requests from our surveys are quite diverse. We also propose our design as a possible model for other numerical software, and welcome comments on our design choices.
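One classic instance of the overflow hazard discussed above, sketched here in Python under the assumption of IEEE double precision (an illustrative example, not taken from the paper): a textbook two-norm overflows on valid inputs, while the scaled formulation used by careful BLAS/LAPACK implementations returns the correct finite answer.

```python
import numpy as np

def nrm2_naive(x):
    return np.sqrt(np.sum(x * x))              # x[i]**2 may overflow to inf

def nrm2_scaled(x):
    s = np.max(np.abs(x))
    if s == 0.0:
        return 0.0
    return s * np.sqrt(np.sum((x / s) ** 2))   # scale first, then square

x = np.array([1e200, 1e200])
with np.errstate(over="ignore"):
    print(nrm2_naive(x))    # inf -- overflow in the intermediate squares
print(nrm2_scaled(x))       # 1.4142...e+200, the true norm
```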
[4]
Emory Smith, Shannon Kuntz, Jason Riedy, and Martin Deneroff. Concurrent graph queries on the Lucata Pathfinder. CoRR, 2022. [ bib | arXiv | http ]
High-performance analysis of unstructured data like graphs now is critical for applications ranging from business intelligence to genome analysis. Towards this, data centers hold large graphs in memory to serve multiple concurrent queries from different users. Even a single analysis often explores multiple options. Current computing architectures often are not the most time- or energy-efficient solutions. The novel Lucata Pathfinder architecture tackles this problem, combining migratory threads for low-latency reading with memory-side processing for high-performance accumulation. One hundred to 750 concurrent breadth-first searches (BFS) all achieve end-to-end speed-ups of 81% to 97% over one-at-a-time queries on a graph with 522M edges. Compared to RedisGraph running on a large Intel-based server, the Pathfinder achieves a 19× speed-up running 128 BFS queries concurrently. The Pathfinder also efficiently supports a mix of concurrent analyses, demonstrated with connected components and BFS.
[5]
Jason Riedy. Programming on the Lucata data-first architecture. In Boston Area Architecture Workshop (BARC), January 2022. Keynote. [ bib | .pdf ]
[6]
Jason Riedy and Shannon Kuntz. Lightning talks: Updates/news from the GraphBLAS implementers. LAGraph meeting, October 2021. [ bib | http ]
[7]
Jason Riedy. Lightning talks: Updates/news from the GraphBLAS implementers. HPEC GraphBLAS BoF, September 2021. [ bib | http ]
[8]
Eric R. Hein, Srinivas Eswar, Abdurrahman Yaşar, Jiajia Li, Jeffrey S. Young, Thomas M. Conte, Ümit V. Çatalyürek, Richard Vuduc, Jason Riedy, and Bora Uçar. Programming strategies for irregular algorithms on the Emu Chick. ACM Trans. Parallel Comput., 7(4), October 2020. [ bib | DOI ]
The Emu Chick prototype implements migratory memory-side processing in a novel hardware system. Rather than transferring large amounts of data across the system interconnect, the Emu Chick moves lightweight thread contexts to near-memory cores before the beginning of each remote memory read. Previous work has characterized the performance of the Chick prototype in terms of memory bandwidth and programming differences from more typical, non-migratory platforms, but there has not yet been an analysis of algorithms on this system. This work evaluates irregular algorithms that could benefit from the lightweight, memory-side processing of the Chick and demonstrates techniques and optimization strategies for achieving performance in sparse matrix-vector multiply operation (SpMV), breadth-first search (BFS), and graph alignment across up to eight distributed nodes encompassing 64 nodelets in the Chick system. We also define and justify relative metrics to compare prototype FPGA-based hardware with established ASIC architectures. The Chick currently supports up to 68x scaling for graph alignment, 80 MTEPS for BFS on balanced graphs, and 50% of measured STREAM bandwidth for SpMV.
Keywords: EMU architecture
[9]
Jason Riedy. GraphBLAS and Emus. IEEE HPEC GraphBLAS BoF, September 2020. [ bib | http ]
[10]
Jason Riedy. Graph analysis and novel architectures. CERFACS Sparse Days, September 2020. [ bib | http ]
[11]
Jason Riedy. Potential directions for moving IEEE-754 forward. NSF ICERM Workshop on Variable Precision in Mathematical and Scientific Computing, May 2020. [ bib | .pdf ]
[12]
Jason Riedy, James Demmel, and Peter Ahrens. Reproducible linear algebra from application to architecture. SIAM Parallel Processing for Scientific Computing, February 2020. [ bib | http ]
[13]
Patrick Lavin, Jeffrey Young, Richard Vuduc, Jason Riedy, Aaron Vose, and Daniel Ernst. Evaluating gather and scatter performance on CPUs and GPUs. The International Symposium on Memory Systems (MEMSYS), September 2020. [ bib | DOI | http ]
[14]
Jeffrey Young, Jason Riedy, Tom Conte, Vivek Sarkar, Prasanth Chatarasi, and Sriseshan Srikanth. Experimental insights from the Rogues Gallery testbed. In IEEE International Conference on Rebooting Computing (ICRC19), San Mateo, CA, November 2019. [ bib | DOI ]
[15]
David Donofrio and Jason Riedy. Specializing architectures for data analytics. ARM Research Summit BOF on High Performance Graph Analytics: Algorithms, Programming, Architectures, September 2019. Introduction to invited panel on "We can't build specialized architectures for graphs that can work efficiently with other workloads, so we just need to hand-optimize each and every algorithm for each and every architecture". [ bib | http ]
[16]
Chunxing Yin and Jason Riedy. Concurrent Katz centrality for streaming graphs. In The IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, September 2019. [ bib | DOI ]
Keywords: hpda, graph analysis, parallel algorithm
[17]
Jeffrey Young, Eric Hein, Srinivas Eswar, Patrick Lavin, Jiajia Li, Jason Riedy, Richard Vuduc, and Thomas M. Conte. A microbenchmark characterization of the Emu Chick. Parallel Computing, September 2019. [ bib | DOI ]
The Emu Chick is a prototype system designed around the concept of migratory memory-side processing. Rather than transferring large amounts of data across power-hungry, high-latency interconnects, the Emu Chick moves lightweight thread contexts to near-memory cores before the beginning of each memory read. The current prototype hardware uses FPGAs to implement cache-less “Gossamer” cores for doing computational work and a stationary core to run basic operating system functions and migrate threads between nodes. In this multi-node characterization of the Emu Chick, we extend an earlier single-node investigation of the memory bandwidth characteristics of the system through benchmarks like STREAM, pointer chasing, and sparse matrix-vector multiplication. We compare the Emu Chick hardware to architectural simulation and an Intel Xeon-based platform. Our results demonstrate that for many basic operations the Emu Chick can use available memory bandwidth more efficiently than a more traditional, cache-based architecture although bandwidth usage suffers for computationally intensive workloads like SpMV. Moreover, the Emu Chick provides stable, predictable performance with up to 65% of the peak bandwidth utilization on a random-access pointer chasing benchmark with weak locality.
[18]
Chunxing Yin and Jason Riedy. A new algorithm model for massive-scale streaming graph analysis. International Congress on Industrial and Applied Mathematics, July 2019. [ bib | http ]
[19]
Jason Riedy, James Demmel, and Peter Ahrens. Reproducible linear algebra from application to architecture. International Congress on Industrial and Applied Mathematics, July 2019. [ bib | http ]
[20]
E. Jason Riedy and Jeffrey S. Young. Programming novel architectures in the post-Moore era with the Rogues Gallery. In Practice and Experience in Advanced Research Computing (PEARC), Chicago, IL, July 2019. https://crnch-rg.gitlab.io/pearc-2019/. [ bib | http ]
[21]
Will Powell, Jason Riedy, Jeffrey S. Young, and Tom Conte. Wrangling Rogues: A case study on managing experimental post-Moore architectures. In Practice and Experience in Advanced Research Computing (PEARC '19), Chicago, IL, July 2019. [ bib | DOI ]
[22]
E. Jason Riedy and Jeffrey S. Young. Programming novel architectures in the post-Moore era with the Rogues Gallery. In 24th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Providence, RI, April 2019. https://crnch-rg.gitlab.io/asplos-2019/. [ bib | http ]
[23]
Jason Riedy, Jeffrey Young, and Tom Conte. Novel architectures for applications in data science and beyond. SIAM Conference on Computational Science and Engineering, March 2019. Minisymposium organizer with Jeffrey Young and Tom Conte. [ bib | http ]
[24]
Mark Gates, James W. Demmel, Greg Henry, Xiaoye S. Li, E. Jason Riedy, and Peter Tang. A proposal for next-generation BLAS. SIAM Conference on Computational Science and Engineering, February 2019. [ bib | http ]
[25]
E. Jason Riedy. Characterization of Emu with microbenchmarks. Emu Workshop at the Laboratory for Physical Sciences, January 2019. [ bib ]
[26]
Eric R. Hein, Srinivas Eswar, Abdurrahman Yasar, Jiajia Li, Jeffrey S. Young, Thomas M. Conte, Ümit V. Çatalyürek, Rich Vuduc, E. Jason Riedy, and Bora Uçar. Programming strategies for irregular algorithms on the Emu Chick. CoRR, abs/1901.02775, 2019. [ bib | http ]
The Emu Chick prototype implements migratory memory-side processing in a novel hardware system. Rather than transferring large amounts of data across the system interconnect, the Emu Chick moves lightweight thread contexts to near-memory cores before the beginning of each remote memory read. Previous work has characterized the performance of the Chick prototype in terms of memory bandwidth and programming differences from more typical, non-migratory platforms, but there has not yet been an analysis of algorithms on this system. This work evaluates irregular algorithms that could benefit from the lightweight, memory-side processing of the Chick and demonstrates techniques and optimization strategies for achieving performance in sparse matrix-vector multiply operation (SpMV), breadth-first search (BFS), and graph alignment across up to eight distributed nodes encompassing 64 nodelets in the Chick system. We also define and justify relative metrics to compare prototype FPGA-based hardware with established ASIC architectures. The Chick currently supports up to 68x scaling for graph alignment, 80 MTEPS for BFS on balanced graphs, and 50 % of measured STREAM bandwidth for SpMV.
[27]
IEEE 754 Committee. IEEE standard for floating-point arithmetic. IEEE Std 754-2019, Microprocessor Standards Committee of the IEEE Computer Society, New York, NY, 2019. (committee member and contributor). [ bib | www: ]
Keywords: IEEE standards;floating point arithmetic;programming;IEEE standard;arithmetic formats;computer programming;decimal floating-point arithmetic;754-2008;NaN;arithmetic;binary;computer;decimal;exponent;floating-point;format;interchange;number;rounding;significand;subnormal
[28]
E. Jason Riedy, Greg Henry, James Demmel, Mark Gates, Xiaoye S. Li, and Ping Tak P. Tang. Updated proposal for a next-generation BLAS. Batched, Reproducible, and Reduced Precision BLAS Birds-of-a-Feather at the International Conference for High Performance Computing, Networking, Storage and Analysis, November 2018. [ bib | .pdf ]
The classic BLAS interface is concise and mostly predictable. The BLAS Technical Forum produced a 301-page document in 2001 that incorporated mixed precision and extended operations. And now we face different implementations for reproducibility, even more precisions, and the batched interfaces. The explosion of interfaces causes problems for platform optimization and interface generation. The "Next-Generation BLAS Proposal" provides a unified naming scheme and semantic requirements for extensions. Inspired by the BLIS project, we also consider a minimal set of microkernels to provide a smaller optimization surface.
Keywords: linear algebra, blas
[29]
James Demmel, Jason Riedy, and Peter Ahrens. Reproducible BLAS: Make addition associative again! SIAM News, 51(8):8, October 2018. [ bib | http ]
Keywords: linear algebra, floating point, ieee754
[30]
Jason Riedy. Plans for IEEE standard 754-2028. In 25th IEEE Symposium on Computer Arithmetic (ARITH 25), June 2018. Invited talk. [ bib | http ]
Keywords: ieee754, floating point, memory centric, linear algebra
[31]
Jason Riedy and James Demmel. Augmented arithmetic operations proposed for IEEE-754 2018. In 25th IEEE Symposium on Computer Arithmetic (ARITH 25), June 2018. [ bib | DOI ]
Keywords: floating point, ieee754
[32]
Jason Riedy. Streaming graph analysis: New models, new architectures. In ACM International Conference on Computing Frontiers, May 2018. Invited talk. [ bib | http ]
Keywords: hpda, graph analysis, streaming data, memory-centric, novel architectures
[33]
Eric Hein, Tom Conte, Jeffrey S. Young, Srinivas Eswar, Jiajia Li, Patrick Lavin, Richard Vuduc, and Jason Riedy. An initial characterization of the Emu Chick. In The Eighth International Workshop on Accelerators and Hybrid Exascale Systems (AsHES), pages 579–588, May 2018. [ bib | DOI ]
The Emu Chick is a prototype system designed around the concept of migratory memory-side processing. Rather than transferring large amounts of data across power-hungry, high-latency interconnects, the Emu Chick moves lightweight thread contexts to near-memory cores before the beginning of each memory read. The current prototype hardware uses FPGAs to implement cache-less "Gossamer" cores for doing computational work and a stationary core to run basic operating system functions and migrate threads between nodes. In this initial characterization of the Emu Chick, we study the memory bandwidth characteristics of the system through benchmarks like STREAM, pointer chasing, and sparse matrix vector multiply. We compare the Emu Chick hardware to architectural simulation and Intel Xeon-based platforms. While it is difficult to accurately compare prototype hardware with existing systems, our initial evaluation demonstrates that the Emu Chick uses available memory bandwidth more efficiently than a more traditional, cache-based architecture. Moreover, the Emu Chick provides stable, predictable performance with 80% bandwidth utilization on a random-access pointer chasing benchmark with weak locality.
Keywords: Instruction sets;Bandwidth;Computer architecture;Benchmark testing;Hardware;Prototypes;Kernel;benchmarking;streaming graphs;computer architecture;sparse tensors;emu
[34]
Chunxing Yin, Jason Riedy, and David A. Bader. A new algorithmic model for graph analysis of streaming data. In Proceedings of the 14th International Workshop on Mining and Learning with Graphs (MLG), May 2018. [ bib | .pdf ]
Keywords: hpda, graph analysis, streaming data
[35]
Jason Riedy. Graph analysis: New algorithm models, new architectures. SIAM Parallel Processing for Scientific Computing, March 2018. Minisymposium organizer with Oded Green and David A. Bader. [ bib ]
Keywords: hpda, graph analysis, streaming data, memory-centric, novel architectures
[36]
Will Powell, E. Jason Riedy, Jeffrey S. Young, and Thomas M. Conte. Wrangling rogues: Managing experimental post-Moore architectures. CoRR, abs/1808.06334, 2018. [ bib | http ]
The Rogues Gallery is a new experimental testbed that is focused on tackling "rogue" architectures for the Post-Moore era of computing. While some of these devices have roots in the embedded and high-performance computing spaces, managing current and emerging technologies presents challenges for system administration that are not always foreseen in traditional data center environments. We present an overview of the motivations and design of the initial Rogues Gallery testbed and cover some of the unique challenges that we have seen and foresee with upcoming hardware prototypes for future post-Moore research. Specifically, we cover the networking, identity management, scheduling of resources, and tools and sensor access aspects of the Rogues Gallery and techniques we have developed to manage these new platforms.
[37]
Jeffrey Young, Eric R. Hein, Srinivas Eswar, Patrick Lavin, Jiajia Li, E. Jason Riedy, Richard W. Vuduc, and Tom Conte. A microbenchmark characterization of the Emu Chick. CoRR, abs/1809.07696, 2018. [ bib | http ]
[38]
Patrick Lavin, E. Jason Riedy, Rich Vuduc, and Jeffrey Young. Spatter: A benchmark suite for evaluating sparse access patterns. CoRR, abs/1811.03743, 2018. [ bib | http ]
Recent characterizations of data movement performance have evaluated optimizations for dense and blocked accesses used by accelerators like GPUs and Xeon Phi, but sparse access patterns like scatter and gather are still not well understood across current and emerging architectures. We propose a tunable benchmark suite, Spatter, that allows users to characterize scatter, gather, and related sparse access patterns at a low level across multiple backends, including CUDA, OpenCL, and OpenMP. Spatter also allows users to vary the block size and amount of data that is moved to create a more comprehensive picture of sparse access patterns and to model patterns that are found in real applications. With Spatter we aim to characterize the performance of memory systems in a novel way by evaluating how the density of accesses compares against real-world effective memory bandwidths (measured by STREAM) and how it can be compared across widely varying architectures including GPUs and x86, ARM, and Power CPUs. We demonstrate how Spatter can be used to generate analysis plots comparing different architectures and show that current GPU systems achieve up to 65% of STREAM bandwidth for sparse accesses and are more energy efficient in doing so for several different sparsity patterns. Our future plans for the Spatter benchmark are to use these results to predict the impact of new memory access primitives on various architectures, develop backends for novel hardware like FPGAs and the Emu Chick, and automate testing so that users can perform their own sparse access studies.
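For intuition, the two access patterns Spatter measures reduce to the kernels below, shown as a minimal NumPy sketch (Spatter itself provides CUDA, OpenCL, and OpenMP backends; the array sizes and uniform index distribution here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1 << 20, 1 << 16
src = rng.random(n)
dst = np.zeros(n)
idx = rng.integers(0, n, size=m)   # the index pattern is the tunable knob

gathered = src[idx]     # gather:  gathered[i] = src[idx[i]]
dst[idx] = src[:m]      # scatter: dst[idx[i]] = src[i] (repeats: last write wins)
```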
[39]
James Demmel, Mark Gates, Greg Henry, Xiaoye S. Li, Jason Riedy, and Ping Tak Peter Tang. A proposal for a next-generation BLAS. (living document, being updated), November 2017. [ bib | http ]
Keywords: lapack, blas, linear algebra
[40]
E. Jason Riedy, Greg Henry, James Demmel, Mark Gates, Xiaoye S. Li, and Ping Tak P. Tang. A proposal for a next-generation BLAS. Batched, Reproducible, and Reduced Precision BLAS Birds-of-a-Feather at the International Conference for High Performance Computing, Networking, Storage and Analysis, November 2017. [ bib | .pdf ]
Keywords: linear algebra, blas
[41]
Eisha Nathan, Anita Zakrzewska, Chunxing Yin, and Jason Riedy. A new direction for streaming graph analysis. IEEE Cluster, September 2017. [ bib ]
Applications in computer network security, social media analysis, and other areas rely on analyzing a changing environment. The data is rich in relationships and lends itself to graph analysis. Traditional static graph analysis cannot keep pace with network security applications analyzing nearly one million events per second and social networks like Facebook collecting 500 thousand comments per second. Streaming frameworks like STINGER support ingesting up to three million edge changes per second, but there are few streaming analysis kernels that keep up with these rates. Here we introduce a new, non-stop model and use it to decouple the analysis from the data ingest.
Keywords: hpda, graph analysis, streaming data, memory-centric, novel architectures
[42]
Eisha Nathan, Anita Zakrzewska, Jason Riedy, and David A. Bader. Local community detection in dynamic graphs using personalized centrality. Algorithms, 10(3), August 2017. [ bib | DOI ]
Analyzing massive graphs poses challenges due to the vast amount of data available. Extracting smaller relevant subgraphs allows for further visualization and analysis that would otherwise be too computationally intensive. Furthermore, many real data sets are constantly changing, and require algorithms to update as the graph evolves. This work addresses the topic of local community detection, or seed set expansion, using personalized centrality measures, specifically PageRank and Katz centrality. We present a method to efficiently update local communities in dynamic graphs. By updating the personalized ranking vectors, we can incrementally update the corresponding local community. Applying our methods on real-world graphs, we are able to obtain speedups of up to 60× compared to static recomputation while maintaining an average recall of 0.94 of the highly ranked vertices returned. Next, we investigate how approximations of a centrality vector affect the resulting local community. Specifically, our method guarantees that the vertices returned in the community are the highly ranked vertices from a personalized centrality metric.
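As a rough illustration of the personalized-centrality approach (a from-scratch sketch, not the paper's incremental update algorithm), personalized Katz centrality can be computed by fixed-point iteration and the top-ranked vertices taken as the seed vertex's local community:

```python
import numpy as np

def personalized_katz(A, seed, alpha=0.1, tol=1e-10, max_iter=1000):
    """Solve c = alpha*A*c + e_seed by fixed-point iteration.
    Converges when alpha < 1/lambda_max(A)."""
    n = A.shape[0]
    b = np.zeros(n)
    b[seed] = 1.0
    c = b.copy()
    for _ in range(max_iter):
        c_new = alpha * (A @ c) + b
        if np.linalg.norm(c_new - c, 1) < tol:
            return c_new
        c = c_new
    return c

# Two triangles joined by one edge; seeding at vertex 0 ranks its triangle first.
A = np.array([[0,1,1,0,0,0], [1,0,1,0,0,0], [1,1,0,1,0,0],
              [0,0,1,0,1,1], [0,0,0,1,0,1], [0,0,0,1,1,0]], float)
c = personalized_katz(A, seed=0)
print(np.argsort(-c)[:3])   # [0 2 1]: the left triangle {0, 1, 2}
```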
[43]
E. Jason Riedy, Chunxing Yin, and David A. Bader. A new algorithm model for massive-scale streaming graph analysis. In SIAM Workshop on Network Science, Pittsburgh, PA, July 2017. [ bib | http ]
Keywords: hpda, graph analysis, streaming data
[44]
Jason Riedy. High-performance analysis of streaming graphs. HPC Analytic Workshop, June 2017. [ bib | http ]
Graph-structured data in social networks, finance, network security, and others not only are massive but also under continual change. These changes often are scattered across the graph. Stopping the world to run a single, static query is infeasible. Repeating complex global analyses on massive snapshots to capture only what has changed is inefficient. We discuss requirements for single-shot queries on changing graphs as well as recent high-performance algorithms that update rather than recompute results. These algorithms are incorporated into our software framework for streaming graph analysis, STINGER.
Keywords: hpda, graph analysis, streaming data, memory-centric, novel architectures
[45]
E. Jason Riedy. High-performance analysis of streaming graphs. SIAM Conference on Computational Science and Engineering, March 2017. Minisymposium organizer with Henning Meyerhenke. [ bib | http ]
Graph-structured data in social networks, finance, network security, and others not only are massive but also under continual change. These changes often are scattered across the graph. Stopping the world to run a single, static query is infeasible. Repeating complex global analyses on massive snapshots to capture only what has changed is inefficient. We discuss requirements for single-shot queries on changing graphs as well as recent high-performance algorithms that update rather than recompute results. These algorithms are incorporated into our software framework for streaming graph analysis, STING (Spatio-Temporal Interaction Networks and Graphs).
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[46]
James Demmel, Greg Henry, Xiaoye Li, Jason Riedy, and Peter Tang. A proposal for a next-generation BLAS. Workshop on Batched, Reproducible, and Reduced Precision BLAS, February 2017. [ bib | .pdf ]
Keywords: linear algebra, blas
[47]
Lawrence B. Holder, Rajmonda Caceres, David F. Gleich, Jason Riedy, Maleq Khan, Nitesh V. Chawla, Ravi Kumar, Yinghui Wu, Christine Klymko, Tina Eliassi-Rad, and Aditya Prakash. Current and future challenges in mining large networks: Report on the Second SDM Workshop on Mining Networks and Graphs. SIGKDD Explorations Newsletter, 18(1):39–45, August 2016. [ bib | DOI ]
Keywords: Network mining, big data, challenges, graph mining
[48]
Marat Dukhan, Richard Vuduc, and Jason Riedy. Wanted: Floating-point add round-off error instruction. In The 2nd International Workshop on Performance Modeling: Methods and Applications (PMMA16), Frankfurt, Germany, June 2016. (Workshop with ISC High Performance). [ bib | arXiv | .pdf ]
We propose a new instruction (FPADDRE) that computes the round-off error in floating-point addition. We explain how this instruction benefits high-precision arithmetic operations in applications where double precision is not sufficient. Performance estimates on Intel Haswell, Intel Skylake, and AMD Steamroller processors, as well as Intel Knights Corner co-processor, demonstrate that such an instruction would improve the latency of double-double addition by up to 55% and increase double-double addition throughput by up to 103%, with smaller, but non-negligible benefits for double-double multiplication. The new instruction delivers up to 2x speedups on three benchmarks that use high-precision floating-point arithmetic: double-double matrix-matrix multiplication, compensated dot product, and polynomial evaluation via the compensated Horner scheme.
Keywords: floating point, ieee754
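The error term the proposed FPADDRE instruction would return is exactly what Knuth's TwoSum error-free transformation computes, at a cost of six dependent floating-point additions in software; the instruction would collapse that cost. A minimal sketch of the software version (the standard textbook algorithm, not the paper's hardware design):

```python
def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bp = s - a               # b as actually represented in the sum
    ap = s - bp              # a as actually represented in the sum
    e = (b - bp) + (a - ap)  # the round-off error FPADDRE would produce
    return s, e

s, e = two_sum(1.0, 1e-17)
print(s, e)   # 1.0 1e-17 -- the tiny addend survives in the error term
```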
[49]
Jason Riedy. Updating PageRank for streaming graphs. In Graph Algorithms Building Blocks (GABB 2016), Chicago, IL, May 2016. (Workshop with IPDPS 2016). [ bib | .pdf ]
Incremental graph algorithms can respond quickly to small changes in massive graphs by updating rather than recomputing analysis metrics. Here we use the linear system formulation of PageRank and ideas from iterative refinement to compute the update to a PageRank vector accurately and quickly. The core idea is to express the residual of the original solution with respect to the updated matrix representing the graph. The update to the residual is sparse. Solving for the solution update with a straight-forward iterative method spreads the change outward from the change locations but converges before traversing the entire graph. We achieve speed-ups of 2× to over 40× relative to a restarted, highly parallel PageRank iteration for small, low-latency batches of edge insertions. These cases traverse 2× to nearly 10000× fewer edges than the restarted PageRank iteration. This provides an interesting test case for the ongoing GraphBLAS effort: Can the APIs support our incremental algorithms cleanly and efficiently?
Keywords: hpda, graph analysis, streaming data, parallel algorithm
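The abstract's linear-system view fits in a few lines. Below is a dense, hedged Python sketch of the idea (the variable names are mine; the paper's implementation keeps the residual and correction sparse so work scales with the change, not the graph):

```python
import numpy as np

def transition_T(adj):
    """Column-stochastic P^T from a dense adjacency matrix (rows = out-edges)."""
    deg = adj.sum(axis=1)
    return (adj / np.where(deg > 0, deg, 1)[:, None]).T

def update_pagerank(x_old, adj_new, alpha=0.85, tol=1e-12):
    """Solve (I - alpha*P^T) x = (1-alpha)/n * 1 for the *updated* graph
    by correcting x_old instead of restarting the iteration."""
    n = len(x_old)
    P_T = transition_T(adj_new)
    b = (1.0 - alpha) * np.full(n, 1.0 / n)
    # Residual of the old answer against the new matrix; after a small edge
    # change this is sparse, which is what makes updating cheap.
    r = b - (x_old - alpha * (P_T @ x_old))
    delta = np.zeros(n)
    while np.linalg.norm(r, 1) > tol:
        delta += r
        r = alpha * (P_T @ r)   # residual contracts by a factor of alpha
    return x_old + delta

adj = np.array([[0, 1, 0], [1, 0, 1], [1, 0, 0]], float)
x = update_pagerank(np.zeros(3), adj)   # x_old = 0 recovers a from-scratch solve
adj[0, 2] = 1                           # insert edge 0 -> 2
x = update_pagerank(x, adj)             # cheap correction from the old answer
print(x, x.sum())                       # ranks sum to 1 (no dangling vertices)
```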
[50]
E. Jason Riedy and David A. Bader. Scalable network analysis: Tools, algorithms, applications. SIAM Parallel Processing for Scientific Computing, April 2016. Minisymposium organizer with Henning Meyerhenke and David A. Bader. [ bib | http ]
Graph analysis provides tools for analyzing the irregular data sets common in health informatics, computational biology, climate science, sociology, security, finance, and many other fields. These graphs possess different structures than typical finite element meshes. Scaling graph analysis to the scales of data being gathered and created has spawned many directions of exciting new research. This minisymposium includes talks on massive graph generation for testing and evaluating parallel algorithms, novel streaming techniques, and parallel graph algorithms for new and existing problems. It also covers existing parallel frameworks and interdisciplinary applications, e.g. the analysis of climate networks.
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[51]
David Bader, Aleksandra Michalewicz, Oded Green, Jessie Birkett-Rees, Jason Riedy, James Fairbanks, and Anita Zakrzewska. Semantic database applications at the Samtavro Cemetery, Georgia. In The 44th Computer Applications and Quantitative Methods in Archaeology Conference (CAA), Oslo, Norway, March 2016. [ bib ]
In 2013 a paper was offered to the CAA concerning archaeological legacy data and semantic database applications, with some preliminary results for a study conducted into the Samtavro cemetery, situated in the South Caucasus in the modern republic of Georgia. The present paper presents further research outcomes of data mining the Samtavro material. Over four thousand graves were excavated at this site, used most intensively during the Late Bronze and Iron Ages, and later in the Roman and Late Antique periods. The current project focuses on the latter period—and the legacy of Soviet and post-Soviet excavations—in a collaborative effort between computer scientists based at the Georgia Institute of Technology, USA, and archaeologists at the University of Melbourne and Monash University, Australia. Data for 1075 tombs, 1249 individuals, and 5842 grave accoutrements were collected across 74 data fields, resulting in the identification of 9 tomb types, 37 artefact types and 320 artefact subtypes. Methods tested against the Samtavro material culture included the application of clustering techniques to understand associations of related items based on patterns of co-occurrence, using traditional data mining (hierarchical link clustering) and spectral graph theory—focusing on tomb types in relation to artefact types. The other method calculated the probability of each event occurring and comparing this to what we would expect if these were truly random—focusing on artefact types in relation to biological sex and age brackets. In some instances, our work confirmed previously established relationships, but it likewise revealed new results concerning particular entities. The project demonstrates that although sites for which comprehensive archival records exist can benefit from these types of approaches, often the greatest limitation in taking a ‘big data’ approach is the relative scarcity of archaeological data.
Keywords: graph analysis, archaeology
[52]
Marat Dukhan, Richard W. Vuduc, and E. Jason Riedy. Wanted: Floating-point add round-off error instruction. CoRR, abs/1603.00491, 2016. [ bib | arXiv | http ]
[53]
E. Jason Riedy. Graph analysis beyond linear algebra. Development of Modern Methods for Linear Algebra, October 2015. Invited presentation. [ bib | .pdf | http ]
High-performance graph analysis is unlocking knowledge in computer security, bioinformatics, social networks, and many other data integration areas. Graphs provide a convenient abstraction for many data problems beyond linear algebra. Some problems map directly to linear algebra. Others, like community detection, look eerily similar to sparse linear algebra techniques. And then there are algorithms that strongly resist attempts at making them look like linear algebra. This talk will cover recent results with an emphasis on streaming graph problems where the graph changes and results need to be updated with minimal latency. We’ll also touch on issues of sensitivity and reliability where graph analysis needs to learn from numerical analysis and linear algebra.
Keywords: lapack, blas, linear algebra, graph analysis, streaming data
[54]
Adam McLaughlin, Jason Riedy, and David A. Bader. An energy-efficient abstraction for simultaneous breadth-first searches. In The IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, September 2015. [ bib | .pdf ]
Optimized GPU kernels are sufficiently complicated to write that they often are specialized to specific input data, target architectures, or applications. This paper presents a multi-search abstraction for computing multiple breadth-first searches in parallel and demonstrates a high-performance, general implementation. Our abstraction removes the burden of orchestrating graph traversal from the user while providing high performance and low energy usage, an often overlooked component of algorithm design. Energy consumption has become a first-class hardware design constraint for both massive and embedded computing platforms. Our abstraction can be applied to such problems as the all-pairs shortest-path problem, community detection, reachability querying, and others. To map graph traversal efficiently to NVIDIA GPUs, our hybrid implementation chooses between processing active vertices with a single thread or an entire warp based on vertex outdegree. For a set of twelve varied graphs, the implementation of our abstraction saves 42% time and 62% energy on average compared to representative implementations of specific applications from existing literature.
Keywords: hpda, graph analysis, parallel algorithm
[55]
Jason Riedy. Network challenge: Error and sensitivity analysis. SDM-Networks 2015: The Second SDM Workshop on Mining Networks and Graphs: A Big Data Analytic Challenge, May 2015. Invited panelist. [ bib | .pdf | http ]
Keywords: graph analysis, sensitivity
[56]
Jason Riedy and David A. Bader. Graph analysis trends and opportunities. In CMG Performance and Capacity, Atlanta, GA, November 2014. Invited presentation. [ bib | .pdf | http ]
High-performance graph analysis is unlocking knowledge in problems like anomaly detection in computer security, community structure in social networks, and many other data integration areas. While graphs provide a convenient abstraction, real-world problems' sparsity and lack of locality challenge current systems. This talk will cover current trends ranging from massive scales to low-power, low-latency systems and summarize opportunities and directions for graphs and computing systems.
Keywords: graph analysis, streaming data, high performance data analysis, parallel algorithm
[57]
Adam McLaughlin, Jason Riedy, and David A. Bader. Optimizing energy consumption and parallel performance for betweenness centrality using GPUs. In The IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, September 2014. “Rising Stars” section. [ bib | DOI | .pdf ]
Applications of high-performance graph analysis range from computational biology to network security and even transportation. These applications often consider graphs under rapid change and are moving beyond HPC platforms into energy-constrained embedded systems. This paper optimizes one successful and demanding analysis kernel, betweenness centrality, for NVIDIA GPU accelerators in both environments. Our algorithm for static analysis is capable of exceeding 2 million traversed edges per second per watt (MTEPS/W). Optimizing the parallel algorithm and treating the dynamic problem directly achieves a 6.39× average speed-up and 84% average reduction in energy consumption.
Keywords: hpda, graph analysis, parallel algorithm
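For context, the kernel being tuned in this paper is Brandes' betweenness centrality: one shortest-path search per source followed by a dependency-accumulation sweep. A minimal sequential Python sketch of that kernel (the paper's contribution is the GPU mapping and energy optimization, not this algorithm):

```python
from collections import deque

def betweenness(adj):
    """adj: dict mapping each vertex to a list of neighbors (unweighted)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1    # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                     # BFS phase
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                 # accumulation phase
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path graph 0-1-2: only the middle vertex lies on a shortest path.
print(betweenness({0: [1], 1: [0, 2], 2: [1]}))  # {0: 0.0, 1: 2.0, 2: 0.0}
```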
[58]
Jason Riedy. STINGER: Analyzing massive, streaming graphs. 3rd GraphLab Workshop, July 2014. [ bib | .pdf ]
Keywords: hpda, graph analysis, streaming data
[59]
Jason Riedy and David A. Bader. STINGER: Multi-threaded graph streaming. In Graph Algorithms Building Blocks (GABB 2014), Phoenix, AZ, May 2014. Invited presentation and panelist. (Workshop with IPDPS 2014). [ bib | .pdf | http ]
Keywords: graph analysis, streaming data, high performance data analysis, parallel algorithm
[60]
Jason Riedy, David A. Bader, David Ediger, Rob McColl, and Timothy G. Mattson. STING: Spatio-temporal interaction networks and graphs for Intel platforms. Presentation at Intel Corporation, Santa Clara, CA, January 2014. [ bib | .pdf | http ]
Keywords: hpda, graph analysis, streaming data
[61]
Shel Swenson, Yogesh Simmhan, Viktor Prasanna, Manish Parashar, Jason Riedy, David Bader, and Richard Vuduc. Sustainable software development for next-gen sequencing (NGS) bioinformatics on emerging platforms. In First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1), Denver, CO, November 2013. held in conjunction with SC13, published electronically (http://wssspe.researchcomputing.org.uk/). [ bib | .pdf | http ]
DNA sequence analysis is fundamental to life science research. The rapid development of next generation sequencing (NGS) technologies, and the richness and diversity of applications it makes feasible, have created an enormous gulf between the potential of this technology and the development of computational methods to realize this potential. Bridging this gap holds possibilities for broad impacts toward multiple grand challenges and offers unprecedented opportunities for software innovation and research. We argue that NGS-enabled applications need a critical mass of sustainable software to benefit from emerging computing platforms' transformative potential. Accumulating the necessary critical mass will require leaders in computational biology, bioinformatics, computer science, and computer engineering work together to identify core opportunity areas, critical software infrastructure, and software sustainability challenges. Furthermore, due to the quickly changing nature of both bioinformatics software and accelerator technology, we conclude that creating sustainable accelerated bioinformatics software means constructing a sustainable bridge between the two fields. In particular, sustained collaboration between domain developers and technology experts is needed to develop the accelerated kernels, libraries, frameworks and middleware that could provide the needed flexible link from NGS bioinformatics applications to emerging platforms.
Keywords: high performance data analysis, accelerator, parallel algorithm
[62]
Shel Swenson, Yogesh Simmhan, Viktor Prasanna, Manish Parashar, David Bader, Jason Riedy, and Richard Vuduc. Report on “Workshop on Challenges in Accelerating Next-Gen Sequencing (NGS) Bioinformatics”. In conjunction with ACM-BCB 2013, September 2013. [ bib | http ]
Keywords: high performance data analysis, accelerator, parallel algorithm
[63]
David Ediger, Karl Jiang, Jason Riedy, and David A. Bader. GraphCT: Multithreaded algorithms for massive graph analysis. IEEE Transactions on Parallel and Distributed Systems, pages 2220–2229, September 2013. [ bib | DOI | .pdf | http ]
The digital world has given rise to massive quantities of data that include rich semantic and complex networks. A social graph, for example, containing hundreds of millions of actors and tens of billions of relationships is not uncommon. Analyzing these large data sets, even to answer simple analytic queries, often pushes the limits of algorithms and machine architectures. We present GraphCT, a scalable framework for graph analysis using parallel and multithreaded algorithms on shared memory platforms. Utilizing the unique characteristics of the Cray XMT, GraphCT enables fast network analysis at unprecedented scales on a variety of input data sets. On a synthetic power law graph with 2 billion vertices and 17 billion edges, we can find the connected components in 2 minutes. We can estimate the betweenness centrality of a similar graph with 537 million vertices and over 8 billion edges in under 1 hour. GraphCT is built for portability and performance.
[64]
Jason Riedy. STINGER: Analyzing massive, streaming graphs. 2nd GraphLab Workshop, July 2013. [ bib | .pdf ]
Keywords: hpda, graph analysis, streaming data
[65]
David Ediger, Jason Riedy, David A. Bader, and Henning Meyerhenke. Computational graph analytics for massive streaming data. In Hamid Sarbazi-azad and Albert Zomaya, editors, Large Scale Network-Centric Computing Systems, Parallel and Distributed Computing, chapter 25. Wiley, July 2013. [ bib | DOI | .pdf ]
Handling the constant stream of data from health care, security, business, and social network applications requires new algorithms and data structures. We present a new approach for parallel massive analysis of streaming, temporal, graph-structured data. For this purpose we examine data structure and algorithm trade-offs that extract the parallelism necessary for high-performance updating analysis of massive graphs. As a result of this study, we propose the extensible and flexible data structure for massive graphs called STINGER (Spatio-Temporal Interaction Networks and Graphs Extensible Representation). Two case studies demonstrate our new approach's effectiveness. The first one computes a dynamic graph's vertices' clustering coefficients. We show that incremental updates are far more efficient than global recomputation. Within this kernel, we compare three methods for dynamically updating local clustering coefficients: a brute-force local recalculation, a sorting algorithm, and our new approximation method using a Bloom filter. On 32 processors of a Cray XMT with a synthetic scale-free graph of 2^24 ≈ 16 million vertices and 2^29 ≈ 537 million edges, the brute-force method processes a mean of over 50000 updates per second, while our Bloom filter approaches 200000 updates per second. The second case study monitors a global feature, a dynamic graph's connected components. We use similar algorithmic ideas as before to exploit the parallelism in the problem and provided by the hardware architecture. On a 16 million vertex graph, we obtain rates of up to 240000 updates per second on 32 processors of a Cray XMT. For the large scale-free graphs typical in our applications, our implementation uses novel batching techniques that exploit the scale-free nature of the data and run over three times faster than prior methods. Our new framework is the first to handle real-world data rates, opening the door to higher-level analytics such as community and anomaly detection.
Keywords: parallel algorithm, hpda, graph analysis, streaming data
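The incremental-update idea for local clustering coefficients is compact: inserting edge (u, v) creates exactly |N(u) ∩ N(v)| new triangles, so only u, v, and their common neighbors change. A hedged Python sketch (plain set intersection stands in for the chapter's sorted-list and Bloom-filter variants):

```python
def insert_edge(adj, tri, u, v):
    """adj: dict vertex -> set of neighbors; tri: dict vertex -> triangle count."""
    common = adj[u] & adj[v]
    for w in common:            # each common neighbor gains one triangle
        tri[w] += 1
    tri[u] += len(common)
    tri[v] += len(common)
    adj[u].add(v)
    adj[v].add(u)

# Local clustering coefficient of v is then 2*tri[v] / (d*(d-1)) for degree d >= 2.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
tri = {0: 0, 1: 0, 2: 0}
insert_edge(adj, tri, 0, 2)     # closes the triangle {0, 1, 2}
print(tri)                      # {0: 1, 1: 1, 2: 1}
```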
[66]
Shel Swenson, Yogesh Simmhan, Viktor Prasanna, Manish Parashar, David Bader, Jason Riedy, and Richard Vuduc. Report on “Workshop on Accelerating Bioinformatics Applications Enabled by NextGen-Sequencing”. Co-located with IPDPS 2013, May 2013. [ bib | http ]
Keywords: high performance data analysis, accelerator, parallel algorithm
[67]
E. Jason Riedy and David A. Bader. Multithreaded community monitoring for massive streaming graph data. In 7th Workshop on Multithreaded Architectures and Applications (MTAAP), Boston, MA, May 2013. [ bib | DOI | .pdf ]
Analyzing static snapshots of massive, graph-structured data cannot keep pace with the growth of social networks, financial transactions, and other valuable data sources. Current state-of-the-art industrial methods analyze these streaming sources using only simple, aggregate metrics. There are few existing scalable algorithms for monitoring complex global quantities like decomposition into community structure. Using our framework STING, we present the first known parallel algorithm specifically for monitoring communities in this massive, streaming, graph-structured data. Our algorithm performs incremental re-agglomeration rather than starting from scratch after each batch of changes, reducing the problem's size to that of the change rather than the entire graph. We analyze our initial implementation's performance on multithreaded platforms for execution time and latency. On an Intel-based multithreaded platform, our algorithm handles up to 100 million updates per second on social networks with one to 30 million edges, providing a speed-up from 4× to 3700× over statically recomputing the decomposition after each batch of changes. Possibly because of our artificial graph generator, resulting communities' modularity varies little from the initial graph.
Keywords: hpda, graph analysis, streaming data, parallel algorithm
[68]
Jason Riedy and David A. Bader. Massive streaming data analytics: A graph-based approach. XRDS: Crossroads, The ACM Magazine for Students — Scientific Computing, 19(3):37–43, March 2013. [ bib | DOI | .pdf ]
Analyzing massive streaming graphs efficiently requires new algorithms, data structures, and computing platforms.
Keywords: graph analysis, high performance data analysis, streaming data
[69]
Robert C. McColl, David Ediger, David A. Bader, and Jason Riedy. Analyzing graph structure in streaming data with STINGER. SIAM Conference on Computational Science and Engineering, February 2013. [ bib | .pdf ]
Analyzing static snapshots of massive, graph-structured data cannot keep pace with the growth of social networks, financial transactions, and other valuable data sources. Our software framework, STING (Spatio-Temporal Interaction Networks and Graphs), uses a scalable, high-performance graph data structure to enable these applications. STING supports fast insertions, deletions, and updates on graphs with semantic information and skewed degree distributions. STING achieves large speed-ups over parallel, static recomputation on both common multicore and specialized multithreaded platforms.
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[70]
David A. Bader, Henning Meyerhenke, and Jason Riedy. Applications and challenges in large-scale graph analysis. SIAM Conference on Computational Science and Engineering, February 2013. [ bib | .pdf ]
Emerging real-world graph problems include detecting community structure in large social networks, improving the resilience of the electric power grid, and detecting and preventing disease in human populations. We discuss the opportunities and challenges in massive data-intensive computing for applications in social network analysis, genomics, and security. The explosion of real-world graph data poses substantial challenges for software, hardware, algorithms, and application experts.
Keywords: hpda, graph analysis, streaming data
[71]
Shel Swenson, Yogesh Simmhan, Viktor K. Prasanna, Manish Parashar, E. Jason Riedy, David A. Bader, and Richard W. Vuduc. Sustainable software development for next-gen sequencing (NGS) bioinformatics on emerging platforms. CoRR, abs/1309.1828, 2013. [ bib | http ]
[72]
Lauren L. Smith and Dolores A. Shaffer. DARPA's High Productivity Computing Systems program: A final report. Supercomputing Birds-of-a-Feather session, November 2012. Invited panel speaker. [ bib ]
The DARPA High Productivity Computing Systems (HPCS) program has been focused on providing a new generation of economically viable high productivity computing systems for national security, scientific, industrial and commercial applications. This program was unique because it focused on system productivity that was defined to include enhancing performance, programmability, portability, usability, manageability and robustness of systems as opposed to just being focused on one execution time performance metric. The BOF is for anyone interested in learning about the two HPCS systems and how productivity in High Performance Computing has been enhanced.
Keywords: hpda, graph analysis, streaming data, novel architectures
[73]
David Ediger, Robert McColl, Jason Riedy, and David A. Bader. STINGER: High performance data structure for streaming graphs. In The IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, September 2012. Best paper award. [ bib | DOI | .pdf ]
The current research focus on “big data” problems highlights the scale and complexity of analytics required and the high rate at which data may be changing. In this paper, we present our high performance, scalable and portable software, Spatio-Temporal Interaction Networks and Graphs Extensible Representation (STINGER), that includes a graph data structure that enables these applications. Key attributes of STINGER are fast insertions, deletions, and updates on semantic graphs with skewed degree distributions. We demonstrate a process of algorithmic and architectural optimizations that enable high performance on the Cray XMT family and Intel multicore servers. Our implementation of STINGER on the Cray XMT processes over 3 million updates per second on a scale-free graph with 537 million edges.
Keywords: hpda, graph analysis, streaming data, parallel algorithm
[74]
David A. Bader, David Ediger, and Jason Riedy. Streaming graph analytics for massive graphs. SIAM Annual Meeting, July 2012. [ bib | .pdf | http ]
Emerging real-world graph problems include detecting community structure in large social networks, improving the resilience of the electric power grid, and detecting and preventing disease in human populations. The volume and richness of data combined with its rate of change renders monitoring properties at scale by static recomputation infeasible. We approach these problems with massive, fine-grained parallelism across different shared memory architectures both to compute solutions and to explore the sensitivity of these solutions to natural bias and omissions within the data.
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[75]
Jason Riedy, David A. Bader, David Ediger, Rob McColl, and Timothy G. Mattson. STING: Spatio-temporal interaction networks and graphs for Intel platforms. Presentation at Intel Corporation, Santa Clara, CA, July 2012. [ bib | .pdf | http ]
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[76]
E. Jason Riedy, David A. Bader, and Henning Meyerhenke. Scalable multi-threaded community detection in social networks. In 6th Workshop on Multithreaded Architectures and Applications (MTAAP), May 2012. [ bib | DOI | .pdf ]
The volume of existing graph-structured data requires improved parallel tools and algorithms. Finding communities, smaller subgraphs more densely connected within the subgraph than to the rest of the graph, plays a role both in developing new parallel algorithms as well as opening smaller portions of the data to current analysis tools. We improve performance of our parallel community detection algorithm by 20% on the massively multithreaded Cray XMT, evaluate its performance on the next-generation Cray XMT2, and extend its reach to Intel-based platforms with OpenMP. To our knowledge, not only is this the first massively parallel community detection algorithm but also the only such algorithm that achieves excellent performance and good parallel scalability across all these platforms. Our implementation analyzes a moderate sized graph with 105 million vertices and 3.3 billion edges in around 500 seconds on a four processor, 80-logical-core Intel-based system and 1100 seconds on a 64-processor Cray XMT2.
Keywords: hpda, graph analysis, parallel algorithm
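A minimal sequential sketch of the agglomerative scheme underlying this line of work (the papers parallelize it; this toy version stores e[i,j], the fraction of edges between communities i and j, so the merge gain is e[i,j] − 2·a_i·a_j, equivalent to the usual 2(e_ij − a_i·a_j) under that normalization):

```python
def greedy_communities(edges, n):
    """CNM-style greedy agglomeration on an undirected edge list."""
    m = float(len(edges))
    e = {}                        # e[(i,j)], i<=j: fraction of edges between i, j
    a = [0.0] * n                 # a[i]: fraction of edge endpoints in community i
    comm = list(range(n))
    for u, v in edges:
        key = (min(u, v), max(u, v))
        e[key] = e.get(key, 0.0) + 1.0 / m
        a[u] += 0.5 / m
        a[v] += 0.5 / m
    while True:
        best, gain = None, 0.0
        for (i, j), w in e.items():
            if i != j and w - 2.0 * a[i] * a[j] > gain:
                best, gain = (i, j), w - 2.0 * a[i] * a[j]
        if best is None:          # no connected pair improves modularity
            return comm
        i, j = best               # merge community j into community i
        merged = {}
        for (p, q), w in e.items():
            p, q = (i if p == j else p), (i if q == j else q)
            key = (min(p, q), max(p, q))
            merged[key] = merged.get(key, 0.0) + w
        e = merged
        a[i] += a[j]
        a[j] = 0.0
        comm = [i if c == j else c for c in comm]

# Two triangles joined by one edge separate into two communities.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
print(greedy_communities(edges, 6))   # [0, 0, 0, 3, 3, 3]
```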
[77]
Jason Riedy, Henning Meyerhenke, David A. Bader, David Ediger, and Timothy G. Mattson. Analysis of streaming social networks and graphs on multicore architectures. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, March 2012. [ bib | DOI | .pdf | http ]
Analyzing static snapshots of massive, graph-structured data cannot keep pace with the growth of social networks, financial transactions, and other valuable data sources. We introduce a framework, STING (Spatio-Temporal Interaction Networks and Graphs), and evaluate its performance on multicore, multisocket Intel(R)-based platforms. STING achieves rates of around 100000 edge updates per second on large, dynamic graphs with a single, general data structure. We achieve speed-ups of up to 1000× over parallel static computation, improve monitoring a dynamic graph's connected components, and show an exact algorithm for maintaining local clustering coefficients performs better on Intel-based platforms than our earlier approximate algorithm.
Keywords: hpda, graph analysis, streaming data, parallel algorithm
[78]
E. Jason Riedy, David Ediger, Henning Meyerhenke, and David A. Bader. STING: Software for analysis of spatio-temporal interaction networks and graphs. SIAM Parallel Processing for Scientific Computing, February 2012. [ bib | .pdf ]
Current tools for analyzing graph-structured data and semantic networks focus on static graphs. Our STING package tackles analysis of streaming graphs like today's social networks and communication tools. STING maintains a massive graph under changes while coordinating analysis kernels to achieve analysis at real-world data rates. We show examples of local metrics like clustering coefficients and global metrics like connected components and agglomerative clustering. STING supports parallel Intel architectures as well as the Cray XMT.
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[79]
David Ediger, E. Jason Riedy, Henning Meyerhenke, and David A. Bader. Analyzing massive networks with GraphCT. SIAM Parallel Processing for Scientific Computing, February 2012. [ bib ]
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[80]
Henning Meyerhenke, E. Jason Riedy, and David A. Bader. Parallel community detection in streaming graphs. SIAM Parallel Processing for Scientific Computing, February 2012. [ bib ]
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[81]
E. Jason Riedy and Henning Meyerhenke. Scalable algorithms for analysis of massive, streaming graphs. SIAM Parallel Processing for Scientific Computing, February 2012. Minisymposium organizer with Henning Meyerhenke. [ bib | .pdf | http ]
Graph-structured data in social networks, finance, network security, and others not only are massive but also under continual change. These changes often are scattered across the graph. Repeating complex global analyses on massive snapshots to capture only what has changed is inefficient. We discuss analysis algorithms for streaming graph data that maintain both local and global metrics. We extract parallelism from both analysis kernel and graph data to scale performance to real-world sizes.
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[82]
David Ediger, Jason Riedy, Rob McColl, and David A. Bader. Parallel programming for graph analysis. In 17th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), New Orleans, LA, February 2012. [ bib | .html ]
An increasingly fast-paced, digital world has produced an ever-growing volume of petabyte-sized datasets. At the same time, terabytes of new, unstructured data arrive daily. As the desire to ask more detailed questions about these massive streams has grown, parallel software and hardware have only recently begun to enable complex analytics in this non-scientific space. In this tutorial, we will discuss the open problems facing us with analyzing this "data deluge". We will present algorithms and data structures capable of analyzing spatio-temporal data at massive scale on parallel systems. We will try to understand the difficulties and bottlenecks in parallel graph algorithm design on current systems and will show how multithreaded and hybrid systems can overcome these challenges. We will demonstrate how parallel graph algorithms can be implemented on a variety of architectures using different programming models. The goal of this tutorial is to provide a comprehensive introduction to the field of parallel graph analysis to an audience with computing background, interested in participating in research and/or commercial applications of this field. Moreover, we will cover leading-edge technical and algorithmic developments in the field and discuss open problems and potential solutions.
Keywords: graph analysis, high performance data analysis, streaming data
[83]
E. Jason Riedy, Henning Meyerhenke, David Ediger, and David A. Bader. Parallel community detection for massive graphs. In 10th DIMACS Implementation Challenge Workshop - Graph Partitioning and Graph Clustering. (workshop paper), Atlanta, Georgia, February 2012. Won first place in the Mix Challenge and Mix Pareto Challenge. [ bib | .pdf | .pdf ]
Tackling the current volume of graph-structured data requires parallel tools. We extend our work on analyzing such massive graph data with a massively parallel algorithm for community detection that scales to current data sizes, clustering a real-world graph of over 100 million vertices and over 3 billion edges in under 500 seconds on a four-processor Intel E7-8870-based server. Our algorithm achieves moderate parallel scalability without sacrificing sequential operational complexity. Community detection partitions a graph into subgraphs more densely connected within the subgraph than to the rest of the graph. We take an agglomerative approach similar to Clauset, Newman, and Moore’s sequential algorithm, merging pairs of connected intermediate subgraphs to optimize different graph properties. Working in parallel opens new approaches to high performance. We improve performance of our parallel community detection algorithm on both the Cray XMT2 and OpenMP platforms and adapt our algorithm to the DIMACS Implementation Challenge data set.
Keywords: hpda, graph analysis, parallel algorithm
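As a concrete illustration of the agglomerative step, the following C sketch scores candidate merges by modularity gain and returns the single best one; the names and the sequential scan are illustrative only (the parallel algorithm instead extracts many independent merges per pass). For communities u and v with inter-community edge weight e_uv, volumes (total degrees) a_u and a_v, and total edge weight m, merging changes modularity by ΔQ = e_uv/m − 2(a_u/2m)(a_v/2m).

    /* Illustrative sketch of one agglomerative merge step in the
     * Clauset-Newman-Moore style; not the authors' code. */
    typedef struct { int u, v; double weight; } comm_edge;  /* inter-community edge */

    static double delta_q(double e_uv, double a_u, double a_v, double m)
    {
        /* Modularity gain of merging communities u and v. */
        return e_uv / m - 2.0 * (a_u / (2.0 * m)) * (a_v / (2.0 * m));
    }

    /* Return the index of the merge with the largest positive gain, or -1. */
    int best_merge(const comm_edge *edges, int nedges,
                   const double *volume, double m)
    {
        int best = -1;
        double best_gain = 0.0;             /* accept only improving merges */
        for (int k = 0; k < nedges; ++k) {
            double gain = delta_q(edges[k].weight,
                                  volume[edges[k].u], volume[edges[k].v], m);
            if (gain > best_gain) { best_gain = gain; best = k; }
        }
        return best;
    }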
[84]
E. Jason Riedy, Henning Meyerhenke, David Ediger, and David A. Bader. Parallel community detection for massive graphs. In David A. Bader, Henning Meyerhenke, Peter Sanders, and Dorothea Wagner, editors, Graph Partitioning and Graph Clustering, volume 588 of Contemporary Mathematics, pages 207–222. American Mathematical Society, 2012. [ bib | DOI | .pdf ]
Tackling the current volume of graph-structured data requires parallel tools. We extend our work on analyzing such massive graph data with a massively parallel algorithm for community detection that scales to current data sizes, clustering a real-world graph of over 100 million vertices and over 3 billion edges in under 500 seconds on a four-processor Intel E7-8870-based server. Our algorithm achieves moderate parallel scalability without sacrificing sequential operational complexity. Community detection partitions a graph into subgraphs more densely connected within the subgraph than to the rest of the graph. We take an agglomerative approach similar to Clauset, Newman, and Moore’s sequential algorithm, merging pairs of connected intermediate subgraphs to optimize different graph properties. Working in parallel opens new approaches to high performance. We improve performance of our parallel community detection algorithm on both the Cray XMT2 and OpenMP platforms and adapt our algorithm to the DIMACS Implementation Challenge data set.
Keywords: graph analysis, community detection, hpda, parallel algorithm
[85]
David A. Bader, David Ediger, and E. Jason Riedy. Parallel programming for graph analysis. Full-day tutorial, Columbia, MD, September 2011. [ bib ]
Keywords: graph analysis, high performance data analysis, streaming data
[86]
E. Jason Riedy, Henning Meyerhenke, David Ediger, and David A. Bader. Parallel community detection for massive graphs. In 9th International Conference on Parallel Processing and Applied Mathematics (PPAM11). Springer, September 2011. [ bib | DOI | .pdf ]
Tackling the current volume of graph-structured data requires parallel tools. We extend our work on analyzing such massive graph data with the first massively parallel algorithm for community detection that scales to current data sizes, clustering graphs of over 122 million vertices and nearly 2 billion edges in under 7300 seconds on a massively multithreaded Cray XMT. Our algorithm achieves moderate parallel scalability without sacrificing sequential operational complexity. Community detection partitions a graph into subgraphs more densely connected within the subgraph than to the rest of the graph. We take an agglomerative approach similar to Clauset, Newman, and Moore's sequential algorithm, merging pairs of connected intermediate subgraphs to optimize different graph properties. Working in parallel opens new approaches to high performance. On smaller data sets, we find the output's modularity compares well with the standard sequential algorithms.
Keywords: hpda, graph analysis, parallel algorithm
[87]
Jason Riedy, David A. Bader, Henning Meyerhenke, David Ediger, and Timothy Mattson. STING: Spatio-temporal interaction networks and graphs for Intel platforms. Presentation at Intel Corporation, Santa Clara, CA, August 2011. [ bib | .pdf | .pdf ]
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[88]
Jason Riedy, David Ediger, David A. Bader, and Henning Meyerhenke. Tracking structure of streaming social networks. 2011 Graph Exploitation Symposium hosted by MIT Lincoln Labs, August 2011. Invited presentation. [ bib | .pdf | .pdf ]
Keywords: hpda, graph analysis, streaming data
[89]
David Ediger, E. Jason Riedy, David A. Bader, and Henning Meyerhenke. Tracking structure of streaming social networks. In 5th Workshop on Multithreaded Architectures and Applications (MTAAP), May 2011. [ bib | DOI | .pdf ]
Current online social networks are massive and still growing. For example, Facebook has over 500 million active users sharing over 30 billion items per month. The scale within these data streams has outstripped traditional graph analysis methods. Monitoring requires dynamic analysis rather than repeated static analysis. The massive state behind multiple persistent queries requires shared data structures and not problem-specific representations. We present a framework based on the STINGER data structure that can monitor a global property, connected components, on a graph of 16 million vertices at rates of up to 240000 updates per second on a 32-processor Cray XMT. For very large scale-free graphs, our implementation uses novel batching techniques that exploit the scale-free nature of the data and run over three times faster than prior methods. Our framework handles, for the first time, real-world data rates, opening the door to higher-level analytics such as community and anomaly detection.
Keywords: hpda, graph analysis, streaming data, parallel algorithm
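A hedged sketch of the batching idea: an inserted edge whose endpoints already carry the same component label cannot change the component structure, so a batch can be filtered down to the few cross-component insertions before any relabeling work begins. The types and function below are illustrative, not the STINGER implementation.

    #include <stddef.h>

    typedef struct { int u, v; } edge;

    /* comp[v] holds the current component label of vertex v. Keep only the
     * insertions that join two different components; same-component edges
     * can be applied to the graph without touching the labels. */
    size_t filter_batch(edge *batch, size_t n, const int *comp)
    {
        size_t kept = 0;
        for (size_t k = 0; k < n; ++k)
            if (comp[batch[k].u] != comp[batch[k].v])
                batch[kept++] = batch[k];   /* survivor needs a merge pass */
        return kept;
    }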
[90]
Jason Riedy. The storm's coming when the chickens spread out. In Fiona Robyn and Kaspalita, editors, pay attention: a river of stones, page 77. lulu.com, March 2011. [ bib | .html ]
Keywords: poetry
[91]
Jason Riedy, David A. Bader, Karl Jiang, Pushkar Pande, and Richa Sharma. Detecting communities from given seeds in social networks. Technical Report GT-CSE-11-01, Georgia Institute of Technology, February 2011. [ bib | .pdf | http ]
Analyzing massive social networks challenges both high-performance computers and human understanding. These massive networks cannot be visualized easily, and their scale makes applying complex analysis methods computationally expensive. We present a region-growing method for finding a smaller, more tractable subgraph, a community, given a few example seed vertices. Unlike existing work, we focus on a small number of seed vertices, from two to a few dozen. We also present the first comparison between five algorithms for expanding a small seed set into a community. Our comparison applies these algorithms to an R-MAT generated graph component with 240 thousand vertices and 32 million edges and evaluates the community size, modularity, Kullback-Leibler divergence, conductance, and clustering coefficient. We find that our new algorithm with a local modularity maximizing heuristic based on Clauset, Newman, and Moore performs very well when the output is limited to 100 or 1000 vertices. When run without a vertex size limit, a heuristic from McCloskey and Bader generates communities containing around 60% of the graph's vertices and having a small conductance and modularity appropriate to the result size. A personalized PageRank algorithm based on Andersen, Lang, and Chung also performs well with respect to our metrics.
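The region-growing idea can be sketched as a greedy loop: starting from the seed set, repeatedly admit the outside vertex that scores best against the current community until a size limit is reached. The sketch below scores by common-neighbor count as a stand-in for the paper's local-modularity heuristic; the CSR layout and all names are assumptions, not the report's code.

    /* Greedy seed-set expansion with an illustrative stand-in score.
     * Neighbors of v live in adj[off[v] .. off[v+1]-1]. */
    typedef struct { const int *off, *adj; int nv; } graph;

    int grow(const graph *g, const int *seeds, int nseeds,
             int *member /* 0/1 per vertex */, int max_size)
    {
        int size = 0;
        for (int s = 0; s < nseeds; ++s) { member[seeds[s]] = 1; ++size; }
        while (size < max_size) {
            int best = -1, best_score = 0;
            for (int v = 0; v < g->nv; ++v) {   /* scan the non-members */
                if (member[v]) continue;
                int score = 0;                  /* edges into the community */
                for (int k = g->off[v]; k < g->off[v+1]; ++k)
                    score += member[g->adj[k]];
                if (score > best_score) { best_score = score; best = v; }
            }
            if (best < 0) break;                /* no attached vertex remains */
            member[best] = 1; ++size;
        }
        return size;
    }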
[92]
David A. Bader, David Ediger, and E. Jason Riedy. Parallel programming for graph analysis. In 16th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), San Antonio, TX, February 2011. [ bib | .html ]
An increasingly fast-paced, digital world has produced an ever-growing volume of petabyte-sized datasets. At the same time, terabytes of new, unstructured data arrive daily. As the desire to ask more detailed questions about these massive streams has grown, parallel software and hardware have only recently begun to enable complex analytics in this non-scientific space. In this tutorial, we will discuss the open problems facing us with analyzing this "data deluge". We will present algorithms and data structures capable of analyzing spatio-temporal data at massive scale on parallel systems. We will try to understand the difficulties and bottlenecks in parallel graph algorithm design on current systems and will show how multithreaded and hybrid systems can overcome these challenges. We will demonstrate how parallel graph algorithms can be implemented on a variety of architectures using different programming models. The goal of this tutorial is to provide a comprehensive introduction to the field of parallel graph analysis to an audience with a computing background interested in participating in research and/or commercial applications of this field. Moreover, we will cover leading-edge technical and algorithmic developments in the field and discuss open problems and potential solutions.
Keywords: graph analysis, high performance data analysis, streaming data
[93]
Participants. Report on NSF Workshop on Center Scale Activities Related to Accelerators for Data Intensive Applications, October 2010. Workshop supported by NSF Grant Number 1051537 in response to the Call for Exploratory Workshop Proposals for Scientific Software Innovation Institutes (S2I2). [ bib ]
Keywords: high performance data analysis, accelerator, parallel algorithm
[94]
David A. Bader, Jonathan Berry, Simon Kahan, Richard Murphy, E. Jason Riedy, and Jeremiah Willcock. Graph 500 benchmark 1 (“search”). Version 1.1, October 2010. [ bib | .html ]
Keywords: graph analysis, parallel algorithm, mistake
[95]
Jason Riedy, David Bader, and David Ediger. Applications in social networks. In NSF Workshop on Accelerators for Data-Intensive Applications, October 2010. [ bib | .pdf | .pdf ]
Keywords: hpda, parallel algorithm, graph analysis, streaming data
[96]
E. Jason Riedy. Here, on the farthest point of the peninsula. In Dana Martin Guthrie, editor, Read Write Poem NaPoWriMo Anthology, page 86. issuu.com, September 2010. [ bib | http ]
Keywords: poetry
[97]
David Ediger, Karl Jiang, E. Jason Riedy, David A. Bader, Courtney Corley, Rob Farber, and William N. Reynolds. Massive social network analysis: Mining twitter for social good. In 39th International Conference on Parallel Processing (ICPP), San Diego, CA, September 2010. [ bib | DOI | .pdf ]
Social networks produce an enormous quantity of data. Facebook consists of over 400 million active users sharing over 5 billion pieces of information each month. Analyzing this vast quantity of unstructured data presents challenges for software and hardware. We present GraphCT, a Graph Characterization Toolkit for massive graphs representing social network data. On a 128-processor Cray XMT, GraphCT estimates the betweenness centrality of an artificially generated (R-MAT) 537 million vertex, 8.6 billion edge graph in 55 minutes. We use GraphCT to analyze public data from Twitter, a microblogging network. Twitter's message connections appear primarily tree-structured, as in a news dissemination system. Within the public data, however, are clusters of conversations. Using GraphCT, we can rank actors within these conversations and help analysts focus attention on a much smaller data subset.
Keywords: hpda, graph analysis, streaming data, parallel algorithm
[98]
David Ediger, Karl Jiang, E. Jason Riedy, and David A. Bader. Massive streaming data analytics: A case study with clustering coefficients. In 4th Workshop on Multithreaded Architectures and Applications (MTAAP), Atlanta, GA, April 2010. [ bib | DOI | .pdf ]
We present a new approach for parallel massive graph analysis of streaming, temporal data with a dynamic and extensible representation. Handling the constant stream of new data from health care, security, business, and social network applications requires new algorithms and data structures. We examine data structure and algorithm trade-offs that extract the parallelism necessary for high-performance updating analysis of massive graphs. Static analysis kernels often rely on storing input data in a specific structure. Maintaining these structures for each possible kernel with high data rates incurs a significant performance cost. A case study computing clustering coefficients on a general-purpose data structure demonstrates incremental updates can be more efficient than global recomputation. Within this kernel, we compare three methods for dynamically updating local clustering coefficients: a brute-force local recalculation, a sorting algorithm, and our new approximation method using a Bloom filter. On 32 processors of a Cray XMT with a synthetic scale-free graph of 2^24 ≈ 16 million vertices and 2^29 ≈ 537 million edges, the brute-force method processes a mean of over 50000 updates per second and our Bloom filter approaches 200000 updates per second.
Keywords: hpda, graph analysis, streaming data, parallel algorithm
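To make the incremental update concrete: when edge (u,v) arrives, every common neighbor w of u and v closes one new triangle, so the cached triangle counts of u, v, and each such w grow by one, and any local coefficient can be refreshed as c_x = 2·tri(x)/(deg(x)·(deg(x)−1)). The sketch below uses an exact membership test at the point where the approximate variant substitutes a Bloom filter; the data layout and names are assumptions, not the MTAAP code.

    /* Illustrative incremental triangle-count update on edge insertion. */
    typedef struct { const int *off, *adj; long *tri; } dyn_graph;  /* CSR view */

    static int is_neighbor(const dyn_graph *g, int v, int w)
    {
        for (int k = g->off[v]; k < g->off[v+1]; ++k)
            if (g->adj[k] == w) return 1;   /* the Bloom filter test goes here */
        return 0;
    }

    void on_insert(dyn_graph *g, int u, int v)
    {
        /* Assumes adj(u) reflects earlier insertions; w == v is skipped so
         * the new edge itself is not miscounted as a triangle. */
        for (int k = g->off[u]; k < g->off[u+1]; ++k) {
            int w = g->adj[k];
            if (w != v && is_neighbor(g, v, w)) {
                ++g->tri[u]; ++g->tri[v]; ++g->tri[w];
            }
        }
    }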
[99]
E. Jason Riedy. Dependable direct solutions for linear systems using a little extra precision. CSE Seminar at Georgia Institute of Technology, August 2009. Invited presentation. [ bib | .pdf | http ]
Solving a square linear system Ax=b often is considered a black box. It's supposed to "just work," and failures often are blamed on the original data or subtleties of floating-point. Now that we have an abundance of cheap computations, however, we can do much better. A little extra precision in just the right places produces accurate solutions cheaply or demonstrates when problems are too hard to solve without significant cost. This talk will outline the method, iterative refinement with a new twist; the benefits, small backward and forward errors; and the trade-offs and unexpected benefits.
Keywords: linear algebra, sparse matrix, floating point, lapack
[100]
James W. Demmel, Mark Frederick Hoemmen, Yozo Hida, and E. Jason Riedy. Non-negative diagonals and high performance on low-profile matrices from Householder QR. SIAM Journal on Scientific Computing, 31(4):2832–2841, July 2009. [ bib | DOI | .pdf ]
The Householder reflections used in LAPACK's QR factorization leave positive and negative real entries along R's diagonal. This is sufficient for most applications of QR factorizations, but a few require that R have a nonnegative diagonal. This note describes a new Householder generation routine to produce a nonnegative diagonal. Additionally, we find that scanning for trailing zeros in the generated reflections leads to large performance improvements when applying reflections with many trailing zeros. Factoring low-profile matrices, those with nonzero entries mostly near the diagonal (e.g., band matrices), now requires far fewer operations. For example, QR factorization of matrices with profile width b that are stored densely in an n×n matrix improves from O(n^3) to O(n^2 + nb^2). These routines are in LAPACK 3.2.
Keywords: LAPACK; QR factorization; Householder reflection; floating-point
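The trailing-zero optimization is simple to state in code: applying H = I − τvvᵀ to a vector only touches entries up to the last nonzero of v, because H acts as the identity beyond that point. A minimal sketch, not LAPACK's xLARF:

    /* Apply H = I - tau*v*v^T to x, skipping v's trailing zeros. */
    void apply_reflector(int n, const double *v, double tau, double *x)
    {
        int last = n;
        while (last > 0 && v[last-1] == 0.0) --last;   /* scan trailing zeros */

        double dot = 0.0;                    /* v(1:last)' * x(1:last) */
        for (int i = 0; i < last; ++i) dot += v[i] * x[i];
        for (int i = 0; i < last; ++i) x[i] -= tau * dot * v[i];
        /* x[last..n-1] is untouched: H is the identity there. */
    }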
[101]
James W. Demmel, Yozo Hida, Xiaoye S. Li, and E. Jason Riedy. Extra-precise iterative refinement for overdetermined least squares problems. ACM Transactions on Mathematical Software, 35(4):1–32, February 2009. [ bib | DOI | .pdf ]
We present the algorithm, error bounds, and numerical results for extra-precise iterative refinement applied to overdetermined linear least squares (LLS) problems. We apply our linear system refinement algorithm to Björck’s augmented linear system formulation of an LLS problem. Our algorithm reduces the forward normwise and componentwise errors to O(ɛ) unless the system is too ill conditioned. In contrast to linear systems, we provide two separate error bounds for the solution x and the residual r. The refinement algorithm requires only limited use of extra precision and adds only O(mn) work to the O(mn^2) cost of QR factorization for problems of size m-by-n. The extra precision calculation is facilitated by the new extended-precision BLAS standard in a portable way, and the refinement algorithm will be included in a future release of LAPACK and can be extended to other types of least squares problems.
Keywords: lapack, ieee754, floating point, linear algebra
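For reference, Björck’s augmented formulation embeds the m-by-n problem min ‖b − Ax‖₂ in one square system over the residual r and the solution x, so the linear-system refinement machinery applies directly and yields separate corrections, and hence separate error bounds, for x and r:

    \[
    \begin{pmatrix} I & A \\ A^{T} & 0 \end{pmatrix}
    \begin{pmatrix} r \\ x \end{pmatrix}
    =
    \begin{pmatrix} b \\ 0 \end{pmatrix},
    \qquad r = b - Ax .
    \]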
[102]
E. Jason Riedy. Auctions for distributed (and possibly parallel) matchings. Visit to CERFACS courtesy of the Franco-Berkeley Fund, December 2008. Invited presentation. [ bib | .pdf | .pdf ]
Keywords: linear algebra, sparse matrix, floating point, lapack
[103]
IEEE 754 Committee. IEEE standard for floating-point arithmetic. IEEE Std 754-2008, Microprocessor Standards Committee of the IEEE Computer Society, New York, NY, August 2008. (committee member and contributor). [ bib | DOI ]
This standard specifies interchange and arithmetic formats and methods for binary and decimal floating-point arithmetic in computer programming environments. This standard specifies exception conditions and their default handling. An implementation of a floating-point system conforming to this standard may be realized entirely in software, entirely in hardware, or in any combination of software and hardware. For operations specified in the normative part of this standard, numerical results and exceptions are uniquely determined by the values of the input data, sequence of operations, and destination formats, all under user control.
Keywords: IEEE standards; floating point arithmetic; programming; IEEE standard; arithmetic formats; computer programming; decimal floating-point arithmetic; 754-2008; NaN; arithmetic; binary; computer; decimal; exponent; floating-point; format; interchange; number; rounding; significand; subnormal
[104]
James W. Demmel, Mark Frederick Hoemmen, Yozo Hida, and E. Jason Riedy. Non-negative diagonals and high performance on low-profile matrices from Householder QR. LAPACK Working Note 203, Netlib, May 2008. Also issued as UCB/EECS-2008-76; modified from SISC version. [ bib | .pdf | .pdf ]
The Householder reflections used in LAPACK's QR factorization leave positive and negative real entries along R's diagonal. This is sufficient for most applications of QR factorizations, but a few require that R have a nonnegative diagonal. This note describes a new Householder generation routine to produce a nonnegative diagonal. Additionally, we find that scanning for trailing zeros in the generated reflections leads to large performance improvements when applying reflections with many trailing zeros. Factoring low-profile matrices, those with nonzero entries mostly near the diagonal (e.g., band matrices), now requires far fewer operations. For example, QR factorization of matrices with profile width b that are stored densely in an n×n matrix improves from O(n^3) to O(n^2 + nb^2). These routines are in LAPACK 3.2.
[105]
James W. Demmel, Yozo Hida, Xiaoye S. Li, and E. Jason Riedy. Extra-precise iterative refinement for overdetermined least squares problems. LAPACK Working Note 188, Netlib, May 2007. Also issued as UCB/EECS-2007-77; version accepted for TOMS. [ bib | .pdf | .pdf ]
We present the algorithm, error bounds, and numerical results for extra-precise iterative refinement applied to overdetermined linear least squares (LLS) problems. We apply our linear system refinement algorithm to Björck’s augmented linear system formulation of an LLS problem. Our algorithm reduces the forward normwise and componentwise errors to O(ɛ) unless the system is too ill conditioned. In contrast to linear systems, we provide two separate error bounds for the solution x and the residual r. The refinement algorithm requires only limited use of extra precision and adds only O(mn) work to the O(mn^2) cost of QR factorization for problems of size m-by-n. The extra precision calculation is facilitated by the new extended-precision BLAS standard in a portable way, and the refinement algorithm will be included in a future release of LAPACK and can be extended to other types of least squares problems.
[106]
James W. Demmel, Yozo Hida, Xiaoye S. Li, E. Jason Riedy, Meghana Vishvanath, and David Vu. Precise solutions for overdetermined least squares problems. Stanford 50 – Eighth Bay Area Scientific Computing Day, March 2007. [ bib | .pdf ]
Linear least squares (LLS) fitting is the most widely used data modeling technique and is included in almost every data analysis system (e.g. spreadsheets). These software systems often give no feedback on the conditioning of the LLS problem or the floating-point calculation errors present in the solution. With limited use of extra precision, we can eliminate these concerns for all but the most ill-conditioned LLS problems. Our algorithm provides either a solution and residual with relatively tiny error or a notice that the LLS problem is too ill-conditioned.
Keywords: least squares, lapack, blas, linear algebra, floating point
[107]
James W. Demmel, Jack Dongarra, Beresford Parlett, W. Kahan, Ming Gu, David Bindel, Yozo Hida, Xiaoye S. Li, Osni A. Marques, E. Jason Riedy, Christof Vömel, Julien Langou, Piotr Luszczek, Jakub Kurzak, Alfredo Buttari, Julie Langou, and Stanimire Tomov. Prospectus for the next LAPACK and ScaLAPACK libraries. LAPACK Working Note 181, Netlib, February 2007. Also issued as UT-CS-07-592. [ bib | .pdf | .pdf ]
[108]
Osni A. Marques, E. Jason Riedy, and Christof Vömel. Benefits of IEEE-754 features in modern symmetric tridiagonal eigensolvers. SIAM Journal on Scientific Computing, 28(5):1613–1633, September 2006. [ bib | DOI | .pdf ]
Bisection is one of the most common methods used to compute the eigenvalues of symmetric tridiagonal matrices. Bisection relies on the Sturm count: for a given shift σ, the number of negative pivots in the factorization T − σI = LDL^T equals the number of eigenvalues of T that are smaller than σ. In IEEE-754 arithmetic, the value ∞ permits the computation to continue past a zero pivot, producing a correct Sturm count when T is unreduced. Demmel and Li showed [IEEE Trans. Comput., 43 (1994), pp. 983–992] that using ∞ rather than testing for zero pivots within the loop could significantly improve performance on certain architectures. When eigenvalues are to be computed to high relative accuracy, it is often preferable to work with LDL^T factorizations instead of the original tridiagonal T. One important example is the MRRR algorithm. When bisection is applied to the factored matrix, the Sturm count is computed from LDL^T, which makes differential stationary and progressive qds algorithms the methods of choice. While it seems trivial to replace T by LDL^T, in reality these algorithms are more complicated: in IEEE-754 arithmetic, a zero pivot produces an overflow followed by an invalid exception (NaN, or “Not a Number”) that renders the Sturm count incorrect. We present alternative, safe formulations that are guaranteed to produce the correct result. Benchmarking these algorithms on a variety of platforms shows that the original formulation without tests is always faster provided that no exception occurs. The transforms see speed-ups of up to 2.6x over the careful formulations. Tests on industrial matrices show that encountering exceptions in practice is rare. This leads to the following design: first, compute the Sturm count by the fast but unsafe algorithm. Then, if an exception occurs, recompute the count by a safe, slower alternative. The new Sturm count algorithms improve the speed of bisection by up to 2x on our test matrices. Furthermore, unlike the traditional tiny-pivot substitution, proper use of IEEE-754 features provides a careful formulation that imposes no input range restrictions.
Keywords: lapack, ieee754, floating point, linear algebra
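A compact C sketch of the fast, test-free Sturm count on the LDL^T representation, in the spirit of the stationary qds recurrence the abstract describes; the array layout and the NaN fallback signal are illustrative, not the LAPACK kernel. Under IEEE-754 semantics a zero pivot produces ±∞ and the recurrence simply continues; only a NaN (e.g., from 0·∞) forces the safe, slower recomputation.

    #include <math.h>

    /* d[i]: diagonal of D; lld[i]: l_i^2 * d_i from the LDL^T factors.
     * Returns the number of eigenvalues below sigma, or -1 on a NaN. */
    int sturm_count(int n, const double *d, const double *lld, double sigma)
    {
        int count = 0;
        double s = -sigma;
        for (int i = 0; i < n - 1; ++i) {
            double dplus = d[i] + s;            /* pivot of LDL^T - sigma*I */
            if (dplus < 0.0) ++count;
            s = lld[i] * (s / dplus) - sigma;   /* may pass through +/-inf */
        }
        if (d[n-1] + s < 0.0) ++count;
        if (isnan(s)) return -1;   /* invalid exception: redo with safe variant */
        return count;
    }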
[109]
Jack Dongarra, Julien Langou, and E. Jason Riedy. Sca/LAPACK program style. August 2006. [ bib | .html ]
The purpose of this document is to facilitate contributions to LAPACK and ScaLAPACK by documenting their design and implementation guidelines. The long-term goal is to provide guidelines for both LAPACK and ScaLAPACK. However, the parallel ScaLAPACK code has more open issues, so this document primarily concerns LAPACK.
Keywords: linear algebra, lapack, blas
[110]
James W. Demmel, Jack Dongarra, Beresford Parlett, W. Kahan, Ming Gu, David Bindel, Yozo Hida, Xiaoye S. Li, Osni A. Marques, E. Jason Riedy, Christof Vömel, Julien Langou, Piotr Luszczek, Jakub Kurzak, Alfredo Buttari, Julie Langou, and Stanimire Tomov. Prospectus for the next LAPACK and ScaLAPACK libraries. In PARA'06: State-of-the-Art in Scientific and Parallel Computing, Umeå, Sweden, June 2006. High Performance Computing Center North (HPC2N) and the Department of Computing Science, Umeå University, Springer. [ bib | DOI | .pdf | .pdf ]
LAPACK and ScaLAPACK are widely used software libraries for numerical linear algebra. There have been over 68M web hits at www.netlib.org for the associated libraries LAPACK, ScaLAPACK, CLAPACK and LAPACK95. LAPACK and ScaLAPACK are used to solve leading edge science problems and they have been adopted by many vendors and software providers as the basis for their own libraries, including AMD, Apple (under Mac OS X), Cray, Fujitsu, HP, IBM, Intel, NEC, SGI, several Linux distributions (such as Debian), NAG, IMSL, the MathWorks (producers of MATLAB), Interactive Supercomputing, and PGI. Future improvements in these libraries will therefore have a large impact on users.
Keywords: lapack, linear algebra, floating point
[111]
James W. Demmel, Yozo Hida, W. Kahan, Xiaoye S. Li, Sonil Mukherjee, and E. Jason Riedy. Error bounds from extra-precise iterative refinement. ACM Transactions on Mathematical Software, 32(2):325–351, June 2006. [ bib | DOI | .pdf ]
We present the design and testing of an algorithm for iterative refinement of the solution of linear equations where the residual is computed with extra precision. This algorithm was originally proposed in 1948 and analyzed in the 1960s as a means to compute very accurate solutions to all but the most ill-conditioned linear systems. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) There was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard has essentially removed the first obstacle. To overcome the second obstacle, we show how the application of iterative refinement can be used to compute an error bound in any norm at small cost and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound.
Keywords: lapack, ieee754, floating point, linear algebra
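The refinement loop itself is short; the subtlety lives in forming the residual in extra precision and in the error-bound logic the paper develops. A minimal sketch assuming callbacks for the extra-precise residual and for back-substitution with the existing factors (both callbacks are placeholders, not LAPACK routines):

    #include <math.h>

    void refine(int n, double *x,
                void (*residual_ep)(const double *x, double *r), /* b - A*x, extra precision */
                void (*solve)(const double *r, double *dx),      /* reuse LU factors of A */
                double *r, double *dx, int max_iter)
    {
        for (int it = 0; it < max_iter; ++it) {
            residual_ep(x, r);              /* the extra precision happens here */
            solve(r, dx);                   /* cheap: A is already factored */
            double ndx = 0.0, nx = 0.0;
            for (int i = 0; i < n; ++i) {
                x[i] += dx[i];
                ndx = fmax(ndx, fabs(dx[i]));
                nx  = fmax(nx,  fabs(x[i]));
            }
            if (ndx <= 1e-16 * nx) break;   /* correction near working precision */
        }
    }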
[112]
E. Jason Riedy. Making static pivoting dependable. Seventh Bay Area Scientific Computing Day, March 2006. [ bib | .pdf | .pdf ]
For sparse LU factorization, dynamic pivoting tightly couples symbolic and numerical computation. Dynamic structural changes limit parallel scalability. Demmel and Li use static pivoting in distributed SuperLU for performance, but intentionally perturbing the input may lead silently to erroneous results. Are there experimentally stable static pivoting heuristics that lead to a dependable direct solver? The answer is currently a qualified yes. Current heuristics fail on a few systems, but all failures are detectable.
Keywords: sparse matrix, linear algebra, floating point, graph analysis
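One widely cited heuristic of the kind the talk examines, sketched here in the spirit of Demmel and Li's static pivoting and only as an assumption about the details, replaces a too-small pivot with a tiny multiple of ‖A‖ and relies on later refinement to detect whether the perturbation mattered:

    #include <math.h>

    /* Illustrative static-pivot fixup; the threshold is a common choice,
     * not a prescription. */
    double fix_pivot(double pivot, double norm_a)
    {
        const double tiny = sqrt(2.2e-16) * norm_a;   /* sqrt(eps) * ||A|| */
        if (fabs(pivot) < tiny)
            return (pivot < 0.0) ? -tiny : tiny;      /* perturb, keep sign */
        return pivot;
    }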
[113]
E. Jason Riedy, Yozo Hida, and James W. Demmel. The future of LAPACK and ScaLAPACK. Robert C. Thompson Matrix Meeting, November 2005. [ bib | .pdf | .pdf ]
We are planning new releases of the widely used LAPACK and ScaLAPACK numerical linear algebra libraries. Based on an on-going user survey (http://www.netlib.org/lapack-dev) and research by many people, we are proposing the following improvements: Faster algorithms (including better numerical methods, memory hierarchy optimizations, parallelism, and automatic performance tuning to accommodate new architectures), more accurate algorithms (including better numerical methods, and use of extra precision), expanded functionality (including updating and downdating, new eigenproblems, etc., and putting more of LAPACK into ScaLAPACK), and improved ease of use (friendlier interfaces in multiple languages). To accomplish these goals we are also relying on better software engineering techniques and contributions from collaborators at many institutions. This is joint work with Jack Dongarra.
Keywords: lapack, linear algebra, floating point
[114]
Osni A. Marques, E. Jason Riedy, and Christof Vömel. Benefits of IEEE-754 features in modern symmetric tridiagonal eigensolvers. LAPACK Working Note 172, Netlib, September 2005. Also issued as UCB//CSD-05-1414; expanded from SISC version. [ bib | .pdf | .pdf ]
Bisection is one of the most common methods used to compute the eigenvalues of symmetric tridiagonal matrices. Bisection relies on the Sturm count: for a given shift σ, the number of negative pivots in the factorization T − σI = LDL^T equals the number of eigenvalues of T that are smaller than σ. In IEEE-754 arithmetic, the value ∞ permits the computation to continue past a zero pivot, producing a correct Sturm count when T is unreduced. Demmel and Li showed [IEEE Trans. Comput., 43 (1994), pp. 983–992] that using ∞ rather than testing for zero pivots within the loop could significantly improve performance on certain architectures. When eigenvalues are to be computed to high relative accuracy, it is often preferable to work with LDL^T factorizations instead of the original tridiagonal T. One important example is the MRRR algorithm. When bisection is applied to the factored matrix, the Sturm count is computed from LDL^T, which makes differential stationary and progressive qds algorithms the methods of choice. While it seems trivial to replace T by LDL^T, in reality these algorithms are more complicated: in IEEE-754 arithmetic, a zero pivot produces an overflow followed by an invalid exception (NaN, or “Not a Number”) that renders the Sturm count incorrect. We present alternative, safe formulations that are guaranteed to produce the correct result. Benchmarking these algorithms on a variety of platforms shows that the original formulation without tests is always faster provided that no exception occurs. The transforms see speed-ups of up to 2.6x over the careful formulations. Tests on industrial matrices show that encountering exceptions in practice is rare. This leads to the following design: first, compute the Sturm count by the fast but unsafe algorithm. Then, if an exception occurs, recompute the count by a safe, slower alternative. The new Sturm count algorithms improve the speed of bisection by up to 2x on our test matrices. Furthermore, unlike the traditional tiny-pivot substitution, proper use of IEEE-754 features provides a careful formulation that imposes no input range restrictions.
[115]
E. Jason Riedy. Modern language tools and 754R. ARITH'05, June 2005. Invited presentation and panelist. [ bib | .pdf | .pdf ]
Keywords: linear algebra, sparse matrix, floating point, lapack
[116]
David Hough, Bill Hay, Jeff Kidder, E. Jason Riedy, Guy L. Steele Jr., and Jim Thomas. Arithmetic interactions: From hardware to applications. In 17th IEEE Symposium on Computer Arithmetic (ARITH'05), June 2005. See related presentation. [ bib | DOI ]
The entire process of creating and executing applications that solve interesting problems with acceptable cost and accuracy involves a complex interaction among hardware, system software, programming environments, mathematical software libraries, and applications software, all mediated by standards for arithmetic, operating systems, and programming environments. This panel will discuss various issues arising among these various contending points of view, sometimes from the point of view of issues raised during the current IEEE 754R standards revision effort.
Keywords: ieee754, floating point
[117]
E. Jason Riedy. Parallel combinatorial computing and sparse matrices. SIAM Conference on Computational Science and Engineering, February 2005. [ bib | .pdf | .pdf ]
Increasingly, sparse matrix applications produce matrices too large for a single computer's memory. Distributed, parallel computers provide an avenue around memory limitations, but distributing combinatorial algorithms is historically difficult. We use insights from combinatorial optimization to design loosely coupled algorithms for sparse matrix matching, ordering, and symbolic factorization. These algorithms' performance depends on both problem instance and computer architecture. We investigate these aspects of performance and demonstrate issues that affect distributed combinatorial computing.
Keywords: sparse matrix, parallel algorithm, graph analysis
[118]
James W. Demmel, Yozo Hida, W. Kahan, Xiaoye S. Li, Sonil Mukherjee, and E. Jason Riedy. Error bounds from extra-precise iterative refinement. LAPACK Working Note 165, Netlib, February 2005. Also issued as UCB//CSD-05-1414, UT-CS-05-547, and LBNL-56965; expanded from TOMS version. [ bib | .pdf | .pdf ]
We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) There was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n}·ɛ_w), the computed normwise (resp. componentwise) error bound is at most 2·max{10, √n}·ɛ_w, and indeed bounds the true error. Here, n is the matrix dimension and ɛ_w is single precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
[119]
E. Jason Riedy. Parallel weighted bipartite matching and applications. SIAM Parallel Processing for Scientific Computing, February 2004. [ bib | .pdf | .pdf ]
Bipartite matching is one of graph theory's workhorses, occurring in the solution or approximation of many problems. Increasingly, applications' data spans multiple memory spaces, but there is little recent experience with distributed matching algorithms. We present a distributed, parallel implementation for weighted bipartite matching based on Bertsekas's auction algorithm. The bidding process finds local matchings while summarizing updates for occasional communication, leading to superlinear speed-ups on some sparse problems and modest performance on others.
Keywords: sparse matrix, parallel algorithm, graph analysis
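One Gauss-Seidel bidding round of Bertsekas's auction algorithm for a dense benefit matrix looks roughly like the sketch below; the distributed implementation the talk describes batches such bids locally and merges price updates during occasional communication. The names and the dense layout are assumptions, not the talk's code.

    #include <float.h>

    /* benefit: n-by-n, row-major. price[j]: current price of column j.
     * owner[j]: row owning column j, or -1. match[u]: column of row u, or -1. */
    void bid_round(int n, const double *benefit, double *price,
                   int *owner, int *match, double eps)
    {
        for (int u = 0; u < n; ++u) {
            if (match[u] >= 0) continue;            /* already assigned */
            int best_j = -1;
            double best = -DBL_MAX, second = -DBL_MAX;
            for (int j = 0; j < n; ++j) {           /* find the two best values */
                double v = benefit[u*n + j] - price[j];
                if (v > best) { second = best; best = v; best_j = j; }
                else if (v > second) second = v;
            }
            price[best_j] += best - second + eps;   /* bid up the price */
            if (owner[best_j] >= 0) match[owner[best_j]] = -1;  /* evict */
            owner[best_j] = u; match[u] = best_j;
        }
    }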
[120]
E. Jason Riedy. Sparse data structures for weighted bipartite matching. SIAM Workshop on Combinatorial Scientific Computing, February 2004. [ bib | .pdf | .pdf ]
Keywords: sparse matrix, graph analysis
[121]
E. Jason Riedy. Practical alternatives for parallel pivoting. SIAM Annual Meeting, June 2003. [ bib | .pdf | .pdf ]
Traditional pivoting during parallel, unsymmetric LU factorization introduces heavy communication and restructuring costs. Possible alternatives include pre-pivoting to place heavy elements along the diagonal and limited pivoting that maintains the factors' structures. Each alternative comes with trade-offs that affect accuracy and performance.
Keywords: sparse matrix, linear algebra, parallel algorithm, graph analysis
[122]
E. Jason Riedy. Parallel bipartite matching for sparse matrix computations. SIAM Conference on Computational Science and Engineering, February 2003. [ bib | .pdf | .pdf ]
Practical and efficient methods exist for parallelizing the numerical work in sparse matrix calculations. The initial symbolic analysis is now becoming a sequential bottleneck, limiting problems' sizes. One such analysis is the weighted bipartite matching used to achieve scalable, unsymmetric LU factorization in SuperLU. Applying a mathematical optimization algorithm produces a distributed-memory implementation with explicit trade-offs between speed and matching quality. We present accuracy and performance results for this phase alone and in the context of SuperLU.
Keywords: sparse matrix, parallel algorithm, linear algebra, graph analysis
[123]
David Bindel and E. Jason Riedy. Exception handling interfaces, implementations, and evaluation. IEEE-754r revision meeting, August 2002. [ bib | .pdf | .pdf ]
Keywords: floating point, ieee754
[124]
E. Jason Riedy. Parallel bipartite matching for sparse matrix computation. Third Bay Area Scientific Computing Day, March 2002. [ bib ]
Keywords: sparse matrix, parallel algorithm, graph analysis
[125]
E. Jason Riedy. Type system support for floating-point computation. May 2001. [ bib | .pdf ]
Floating-point arithmetic is often seen as untrustworthy. We show how manipulating precisions according to the following rules of thumb enhances the reliability of and removes surprises from calculations: Store data narrowly, compute intermediates widely, and derive properties widely. Further, we describe a typing system for floating point that both supports and is supported by these rules. A single type is established for all intermediate computations. The type describes a precision at least as wide as all inputs to and results from the computation. Picking a single type provides benefits to users, compilers, and interpreters. The type system also extends cleanly to encompass intervals and higher precisions.
Keywords: floating point, ieee754
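The rules of thumb translate directly into code. A minimal illustration of "store data narrowly, compute intermediates widely": float inputs and results with a double accumulator for the intermediate work.

    /* Narrow storage, wide intermediates: a dot product over float data. */
    double dot(int n, const float *x, const float *y)
    {
        double acc = 0.0;                   /* wide intermediate type */
        for (int i = 0; i < n; ++i)
            acc += (double)x[i] * (double)y[i];
        return acc;                         /* caller may round back to float */
    }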
[126]
E. Jason Riedy and Robert Szewczyk. Power and control in networked sensors. Cited, May 2000. [ bib | .pdf ]
The fundamental constraint on a networked sensor is its energy consumption, since replacing its energy source may be impossible or infeasible. We analyze the power dissipation implications of implementing the network sensor with either a central processor switching between I/O devices or a family of processors, each dedicated to a single device. We present energy measurements of the current generation of networked sensors and develop an abstract description of the tradeoffs between the two designs.
Keywords: embedded, sensor, IoT, novel architecture
[127]
E. Jason Riedy and Rich Vuduc. Microbenchmarking the Tera MTA. Cited, May 1999. [ bib | .pdf ]
The Tera Multithreaded Architecture, or MTA, addresses scalable shared memory system design with a different approach; it tolerates latency by providing fast access to multiple threads of execution. The MTA employs a number of radical design ideas: creation of hardware threads (streams) with frequent context switching; full-empty bits for each memory word; a flat memory hierarchy; and deep pipelines. Recent evaluations of the MTA have taken a top-down approach: port applications and application benchmarks, and compare the absolute performance with conventional systems. While useful, these studies do not reveal the effect of the Tera MTA's unique hardware features on an application. We present a bottom-up approach to the evaluation of the MTA via a suite of microbenchmarks to examine in detail the underlying hardware mechanisms and the cost of runtime system support for multithreading. In particular, we measure memory, network, and instruction latencies; memory bandwidth; the cost of low-level synchronization via full-empty bits; overhead for stream management; and the effects of software pipelining. These data should provide a foundation for performance modeling on the MTA. We also present results for list ranking on the MTA, an application which has traditionally been difficult to scale on conventional parallel systems.
Keywords: parallel algorithm, novel architecture, memory-centric
[128]
Joseph N. Wilson, E. Jason Riedy, Gerhard X. Ritter, and Hongchi Shi. An Image Algebra based SIMD image processing environment. In C. W. Chen and Y. Q. Zhang, editors, Visual Information Representation, Communication, and Image Processing, pages 523–542. Marcel Dekker, New York, 1999. [ bib | .pdf ]
SIMD parallel computers have been employed for image related applications since their inception. They have been leading the way in improving processing speed for those applications. However, current parallel programming technologies have not kept pace with the performance growth and cost decline of parallel hardware. A highly usable parallel software development environment is needed. This chapter presents a computing environment that integrates a SIMD mesh architecture with image algebra for high-performance image processing applications. The environment describes parallel programs through a machine-independent, retargetable image algebra object library that supports SIMD execution on the Lockheed Martin PAL-I parallel computer. Program performance on this machine is improved through on-the-fly execution analysis and scheduling. We describe the relevant elements of the system structure, outline the scheme for execution analysis, and provide examples of the current cost model and scheduling system.
Keywords: image algebra, parallel algorithm
[129]
Joseph N. Wilson and E. Jason Riedy. Efficient SIMD evaluation of image processing programs. In Hongchi Shi and Patrick C. Coffield, editors, Parallel and Distributed Methods for Image Processing, volume 3166, pages 199–210, San Diego, CA, July 1997. SPIE. [ bib | DOI | .pdf ]
SIMD parallel systems have been employed for image processing and computer vision applications since their inception. This paper describes a system in which parallel programs are implemented using a machine-independent, retargetable object library that provides SIMD execution on the Lockheed Martin PAL-I SIMD parallel processor. Programs' performance on this machine is improved through on-the-fly execution analysis and scheduling. We describe the relevant elements of the system structure, the general scheme for execution analysis, and the current cost model for scheduling.
Keywords: image algebra, parallel algorithm

This file was generated by bibtex2html 1.99.