Total projects: 483,346 · Total funding: US$148,735,527,100
No search criteria are currently set; by default, overseas funding data from the past five years is shown. Enter search criteria to search across all years.
Funding amounts: overseas grants are converted into US dollars at the prevailing exchange rate.
[Chart: funding amount by fiscal year]
Award Number:
2312886
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
The field of artificial intelligence (AI) has recently made significant strides, with notable advancements such as large language models like ChatGPT taking the world by storm. However, these breakthroughs would not have been possible without the availability of powerful computing hardware, such as graphics processing units (GPUs). Such hardware has benefited from several decades of technology scaling following Moore's law. As technology approaches its physical limits and AI models require exponentially increasing hardware resources, including computation and storage, alternative computing paradigms with superior energy efficiency and performance are necessary for a sustainable future. Compute-in-memory is one promising approach in which computations are performed directly in memory units, eliminating most data movement, a key bottleneck in conventional computers. However, to best exploit compute-in-memory for accelerating AI models at the gigabyte-to-terabyte scale, it is critical to have a high-capacity, energy-efficient, and high-performance memory technology to fit the models. NAND memory is a form of erasable programmable read-only memory that takes its name from the not-and (NAND) logic gate. The proposed research aims to develop ferroelectric vertical NAND memory to meet these demands and at the same time train students to build a future workforce for the semiconductor industry. Vertical NAND memory offers the highest density by increasing the number of vertically stacked layers. However, conventional vertical NAND memory based on floating-gate or charge-trap flash suffers from poor performance, including high write voltage, low speed, and poor endurance, despite its large capacity. To address these issues, this research proposes the development of a vertical NAND flash alternative: the vertical NAND ferroelectric field-effect transistor (FeFET), which achieves high density and high performance simultaneously.
By leveraging the recently discovered ferroelectric HfO2, superior performance can be achieved, as ferroelectric programming is driven by an applied electric field, which can be energy-efficient and fast. The project aims to design and evaluate vertical NAND FeFET-based compute-in-memory accelerators from devices to architectures, with innovations such as novel cell designs that achieve multi-level cells and suppress device variation, mitigation of vertical NAND array disturb through a novel array structure, and the mapping and benchmarking of various important information processing tasks onto the vertical NAND FeFET array. Additionally, this research includes workforce training activities such as lectures and hands-on experience offered to K-12 students and teachers to generate excitement and attract them to the talent pipeline for the semiconductor industry. The proposed research will recruit graduate students, as well as undergraduate students from underrepresented groups via the Research Experience for Undergraduates (REU) program, and the knowledge acquired in this project will be distributed through curriculum development and online sharing repositories. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
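The compute-in-memory idea described above can be pictured with a short numerical sketch: weights are stored as discrete conductance levels in memory cells (the multi-level-cell goal), and a matrix-vector product is accumulated where the data lives instead of shuttling it to a processor. This is a toy abstraction, not a model of the proposed FeFET devices; the uniform quantizer and bit widths are illustrative assumptions.

```python
import numpy as np

def quantize_to_cells(w, bits=2):
    """Map real-valued weights onto 2**bits discrete conductance levels
    (a multi-level-cell abstraction; real FeFET levels are analog and noisy)."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return np.round((w - lo) / step) * step + lo

def cim_matvec(weights, x, bits=2):
    """In-memory matrix-vector product: inputs drive word lines, and each
    bit-line current sums x_j * G_ij -- an analog dot product per row/column."""
    g = quantize_to_cells(weights, bits)
    return g @ x

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
exact = W @ x
approx = cim_matvec(W, x, bits=4)
print(np.max(np.abs(exact - approx)))  # quantization error shrinks as bits grow
```

More cell levels per device (higher `bits`) shrink the gap to the exact product, which is why multi-level-cell operation matters for accuracy, not just density.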
Keywords:
Award Number:
2312841
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
Deep learning has demonstrated unprecedented performance across various domains in engineering and science. However, a theoretical understanding of its success has remained elusive. Very recently, researchers discovered and characterized an elegant mathematical structure within the learned features and classifiers called Neural Collapse. This phenomenon persists across a variety of different network architectures, datasets, and data domains. This project will leverage the symmetry of Neural Collapse to develop a rigorous mathematical theory that explains when and why it happens, shows how it can be used to quantify generalization performance, and provides guidelines for understanding and improving transferability. By advancing the mathematical foundations of deep learning, this project is expected to influence not only the machine learning community but also related areas such as optimization, signal and image processing, and natural language processing. The project also involves an integrated outreach and education plan, including promoting accessibility and awareness of computing and STEM concepts for K-12 students. This project will expand our understanding of the principles behind the non-convex optimization of deep learning models, and provide new mathematical insights on their generalization and transferability properties, leading to practical implications.
In particular, the project is focused on three overarching research thrusts: (i) provide a unified framework for analyzing convergence guarantees when training deep, overparametrized models through general loss functions to states of neural collapse, first for simplified cases and then for more general deep models that exhibit progressive neural collapse, with multiple labels and data imbalance; (ii) harness the structure of neural collapse to provide tighter generalization bounds for deep models, by characterizing the structure of the resulting classifiers and their mild dependence on the training data, as well as by making natural distributional assumptions; and (iii) leverage the generalization of progressive neural collapse to new environments to understand the transferability of deep models to new domains and tasks, and develop principled approaches for improving transferability and efficient fine-tuning. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
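Concretely, one part of the Neural Collapse structure (often labeled NC2 in the literature) is that the centered, normalized class-mean features converge to a simplex equiangular tight frame: equal norms with pairwise cosine similarity -1/(K-1) for K classes. A minimal sketch of that check, run on hand-built vectors rather than real network features:

```python
import numpy as np

def simplex_etf_gap(class_means):
    """How far centered class means are from a simplex ETF: under Neural
    Collapse, off-diagonal cosine similarities approach -1/(K-1)."""
    K = len(class_means)
    M = class_means - class_means.mean(axis=0)   # center the global mean
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    cos = M @ M.T
    target = -1.0 / (K - 1)
    off_diag = cos[~np.eye(K, dtype=bool)]
    return np.max(np.abs(off_diag - target))

# Three unit vectors at 120-degree angles form a simplex ETF for K=3 in 2D.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
means = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(simplex_etf_gap(means))  # ~0: these means already form a simplex ETF
```

On trained networks, one would feed per-class feature means from the penultimate layer into `simplex_etf_gap` and watch the gap shrink during the terminal phase of training.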
Keywords:
Award Number:
2312872
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
The total volume of global digital data created or copied is estimated to double approximately every three years. This rapid growth has led to an increasing need for reliable and universal access to data in personal, enterprise, and scientific environments. To meet these requirements, data synchronization, which refers to the process of maintaining consistency between different versions of data stored on separate hosts, has become a crucial aspect of managing data. However, state-of-the-art synchronization tools have significant shortcomings and inefficiencies, resulting in increased costs and high-latency access. This project aims to develop data synchronization algorithms with optimal communication bandwidth based on error-correcting codes and to broaden the applicability of synchronization to real-world settings where current tools are inadequate. In addition to scientific and technological advances, the project has the potential to facilitate access to distributed storage systems for users with limited access to broadband Internet, such as in rural areas; help reduce energy consumption associated with data transmission; and provide opportunities to engage and train undergraduate researchers. The goals of the project will be achieved through three research thrusts. The first thrust aims to increase the efficiency of data synchronization by designing low-redundancy systematic edit-correcting codes, along with efficient encoding and decoding algorithms. The second thrust focuses on synchronizing compressed data. As conventional compression typically destroys the similarity between related files, the project will develop mutually compatible compression and synchronization methods which, given the prevalence of data compression, have the potential to significantly expand the use of synchronization for large datasets. Finally, the third thrust will address often-overlooked real-world constraints on synchronization from theoretical and practical points of view. 
In particular, bounds on the information exchange will be established for settings where one party is under communication or computational constraints. Furthermore, incremental and adaptive synchronization protocols will be developed to efficiently synchronize data when the statistics of the stochastic processes governing data updates and modifications are unknown. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
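As a baseline for what this project aims to improve on, the classic hash-based approach to synchronization can be sketched as follows: the receiver's per-block hashes tell the sender which blocks differ, and only those blocks are transmitted. The tiny fixed block size and the lack of insertion/shift handling are deliberate simplifications; edit-correcting codes target exactly the case this scheme handles poorly, where a small insertion shifts every later block boundary.

```python
import hashlib

BLOCK = 4  # toy block size, in bytes

def block_hashes(data: bytes):
    """The receiver summarizes its copy as one hash per fixed-size block."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(new: bytes, old_hashes):
    """Sender side: transmit only the blocks whose hash differs."""
    changes = []
    for idx in range(0, len(new), BLOCK):
        block = new[idx:idx + BLOCK]
        j = idx // BLOCK
        h = hashlib.sha256(block).hexdigest()
        if j >= len(old_hashes) or old_hashes[j] != h:
            changes.append((j, block))
    return changes

def patch(old: bytes, changes):
    """Receiver side: splice the transmitted blocks into its copy."""
    blocks = [old[i:i + BLOCK] for i in range(0, len(old), BLOCK)]
    for j, block in changes:
        while len(blocks) <= j:
            blocks.append(b"")
        blocks[j] = block
    return b"".join(blocks)

old = b"hello world!"
new = b"hello WORLD!"
changes = delta(new, block_hashes(old))
assert patch(old, changes) == new
print(len(changes), "of", len(new) // BLOCK, "blocks transmitted")
```

Here only the two changed blocks cross the wire; a single-byte insertion at the front, by contrast, would make every block hash differ, which is the inefficiency that edit-correcting codes address.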
Keywords:
Award Number:
2312842
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
Deep learning has demonstrated unprecedented performance across various domains in engineering and science. However, a theoretical understanding of its success has remained elusive. Very recently, researchers discovered and characterized an elegant mathematical structure within the learned features and classifiers called Neural Collapse. This phenomenon persists across a variety of different network architectures, datasets, and data domains. This project will leverage the symmetry of Neural Collapse to develop a rigorous mathematical theory that explains when and why it happens, shows how it can be used to quantify generalization performance, and provides guidelines for understanding and improving transferability. By advancing the mathematical foundations of deep learning, this project is expected to influence not only the machine learning community but also related areas such as optimization, signal and image processing, and natural language processing. The project also involves an integrated outreach and education plan, including promoting accessibility and awareness of computing and STEM concepts for K-12 students. This project will expand our understanding of the principles behind the non-convex optimization of deep learning models, and provide new mathematical insights on their generalization and transferability properties, leading to practical implications.
In particular, the project is focused on three overarching research thrusts: (i) provide a unified framework for analyzing convergence guarantees when training deep, overparametrized models through general loss functions to states of neural collapse, first for simplified cases and then for more general deep models that exhibit progressive neural collapse, with multiple labels and data imbalance; (ii) harness the structure of neural collapse to provide tighter generalization bounds for deep models, by characterizing the structure of the resulting classifiers and their mild dependence on the training data, as well as by making natural distributional assumptions; and (iii) leverage the generalization of progressive neural collapse to new environments to understand the transferability of deep models to new domains and tasks, and develop principled approaches for improving transferability and efficient fine-tuning. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Keywords:
Award Number:
2312865
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
The feedback that users provide through their choices (e.g., clicks, purchases) is one of the most common types of data readily available for training autonomous information retrieval and recommendation systems, and it is widely used in online platforms. However, naively training systems on choice data may only improve short-term engagement, not the long-term sustainability of the platform or the long-term benefits to its users, content providers, and other stakeholders. In this complex space of problems and competing interests, it is unlikely that there is a single, compact algorithmic solution that is inherently fair or optimal, for the same reason that our legal codes and tax policies fill sizable libraries. Instead, the project develops a new algorithmic framework for expressing similarly detailed policies for AI systems. This framework provides decision-makers with strategic interventions that predictably steer the long-term dynamics of a platform so that they not only optimize engagement in the short term but also reflect long-term values set by whatever system of governance oversees the platform. To achieve this goal, the project introduces a macroscopic layer of abstraction for AI platforms under which long-term objectives (e.g., user satisfaction, item fairness, supplier pool size) can be measured and influenced through macroscopic interventions (e.g., exposure allocation, promotion policies for new content, anti-discrimination regulation). Since platforms act at the microscopic level, the project develops new search and recommendation methods that optimally break macro-level interventions into a sequence of micro-level interventions (e.g., rankings). The crucial technical challenge lies in bridging the mismatch in time scales between macro-level interventions (e.g., weeks) and micro-level interventions (e.g., individual requests), which is addressed using machine learning, causal inference, and control theory.
This formulation provides a technical layer of abstraction that reduces complexity for both human and automated decision-making at the macro level, enabling strategic reasoning and action. Finally, since optimal actions at any level rely on unbiased and accurate estimates, the project develops new estimators that counteract biases in feedback loops. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
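One way to picture breaking a macro-level exposure target into micro-level ranking decisions is a simple deficit controller: at each request, rank first whichever item is furthest below its target share of cumulative position-weighted exposure. This is an illustrative two-item sketch with made-up position weights and targets, not the control-theoretic machinery the project develops:

```python
def serve_requests(target_share, n_requests, pos_weights=(1.0, 0.5)):
    """Macro level: a target exposure share for item 0 vs item 1.
    Micro level: per-request ranking, chosen greedily to shrink the
    largest deficit against the target."""
    exposure = [0.0, 0.0]
    targets = [target_share, 1.0 - target_share]
    for _ in range(n_requests):
        total_after = sum(exposure) + sum(pos_weights)
        deficit = [targets[i] * total_after - exposure[i] for i in (0, 1)]
        first = 0 if deficit[0] >= deficit[1] else 1
        exposure[first] += pos_weights[0]       # top slot gets more exposure
        exposure[1 - first] += pos_weights[1]   # second slot gets less
    total = sum(exposure)
    return [e / total for e in exposure]

shares = serve_requests(0.6, 1000)
print(shares)  # realized exposure shares track the 0.6 / 0.4 target
```

With weights (1.0, 0.5), any single ranking gives a 2/3 or 1/3 split, so only a sequence of rankings can realize an intermediate macro target, which is precisely the macro-to-micro decomposition described above.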
Keywords:
Award Number:
2312884
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
The field of artificial intelligence (AI) has recently made significant strides, with notable advancements such as large language models like ChatGPT taking the world by storm. However, these breakthroughs would not have been possible without the availability of powerful computing hardware, such as graphics processing units (GPUs). Such hardware has benefited from several decades of technology scaling following Moore's law. As technology approaches its physical limits and AI models require exponentially increasing hardware resources, including computation and storage, alternative computing paradigms with superior energy efficiency and performance are necessary for a sustainable future. Compute-in-memory is one promising approach in which computations are performed directly in memory units, eliminating most data movement, a key bottleneck in conventional computers. However, to best exploit compute-in-memory for accelerating AI models at the gigabyte-to-terabyte scale, it is critical to have a high-capacity, energy-efficient, and high-performance memory technology to fit the models. NAND memory is a form of erasable programmable read-only memory that takes its name from the not-and (NAND) logic gate. The proposed research aims to develop ferroelectric vertical NAND memory to meet these demands and at the same time train students to build a future workforce for the semiconductor industry. Vertical NAND memory offers the highest density by increasing the number of vertically stacked layers. However, conventional vertical NAND memory based on floating-gate or charge-trap flash suffers from poor performance, including high write voltage, low speed, and poor endurance, despite its large capacity. To address these issues, this research proposes the development of a vertical NAND flash alternative: the vertical NAND ferroelectric field-effect transistor (FeFET), which achieves high density and high performance simultaneously.
By leveraging the recently discovered ferroelectric HfO2, superior performance can be achieved, as ferroelectric programming is driven by an applied electric field, which can be energy-efficient and fast. The project aims to design and evaluate vertical NAND FeFET-based compute-in-memory accelerators from devices to architectures, with innovations such as novel cell designs that achieve multi-level cells and suppress device variation, mitigation of vertical NAND array disturb through a novel array structure, and the mapping and benchmarking of various important information processing tasks onto the vertical NAND FeFET array. Additionally, this research includes workforce training activities such as lectures and hands-on experience offered to K-12 students and teachers to generate excitement and attract them to the talent pipeline for the semiconductor industry. The proposed research will recruit graduate students, as well as undergraduate students from underrepresented groups via the Research Experience for Undergraduates (REU) program, and the knowledge acquired in this project will be distributed through curriculum development and online sharing repositories. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Keywords:
Award Number:
2312927
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
In a world where Artificial Intelligence (AI) and High-Performance Computing (HPC) hold immense potential for transformative advancements, this research aims to develop CONCERT, an innovative communication and compression stack, to unlock the full power of heterogeneous architectures and drive high performance and scalability. By leveraging emerging accelerators and networking hardware, CONCERT seeks to address fundamental challenges in utilizing heterogeneous architectures, scaling communication, and integrating application-agnostic on-the-fly data compression. The project's significance lies in its potential to advance the field of AI and HPC by enabling efficient utilization of heterogeneous resources, resulting in enhanced performance and scalability. CONCERT's impact extends beyond scientific advancements. The project will provide valuable guidelines for designing and deploying next-generation HPC systems, benefiting users in academia and industry. By actively promoting diversity and inclusion, particularly among underrepresented minorities and female students, the project fosters a more inclusive STEM environment. The research outcomes will contribute to curriculum advancements, supporting education and research in HPC, Deep/Machine Learning, and Data Analytics. Additionally, the dissemination of results to collaborating organizations will positively impact their HPC software applications, benefiting society as a whole. Over the last few years, Artificial Intelligence (AI) and High-Performance Computing (HPC) applications have been continuously enhanced for performance by exploiting the latest trends in highly heterogeneous hardware in modern HPC systems. These applications have high communication requirements and exchange massive amounts of data over a cluster's limited bandwidth. However, it is challenging for an application to efficiently use all resources available in the system to scale up communication with the emerging on-the-fly compression support.
For this reason, an adaptive communication/compression stack called CONCERT (sCalable cOmmunicatioN Runtimes with On-the-fly Compression for HPC and AI Applications on hEterogenous aRchiTectures) is proposed. CONCERT dynamically employs dedicated resources through load- and architecture-aware Functional Partitioning (FP). It enhances the existing de-facto standard for programming large-scale applications using the Message Passing Interface (MPI). The specific issues addressed by this research are: 1) efficient support for MPI/hybrid programming models on heterogeneous hardware to scale up communication and on-the-fly compression; 2) designing FP-based schemes to offload communication/compression tasks; 3) designing a communication/compression FP scheme to support scale-up requests from thousands of endpoints; and 4) studying the benefits of these schemes in terms of performance and scalability. The transformative impact of the proposed research enables a broad range of AI and HPC applications to efficiently and transparently leverage the emerging accelerators and networking hardware from multiple vendors. A strong software distribution and data dissemination plan is also proposed to have a broader impact on academic and industrial HPC/AI communities. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
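The on-the-fly compression idea at the heart of CONCERT can be illustrated outside of MPI with a minimal send-path sketch: compress the payload, but fall back to raw bytes when compression does not pay. The one-byte header and the choice of zlib are stand-in assumptions for illustration; the actual runtime would integrate such logic into MPI point-to-point and collective paths, possibly offloaded to dedicated resources.

```python
import zlib

def send_with_compression(payload: bytes, threshold: float = 0.9) -> bytes:
    """Compress in the send path; ship the compressed form only if it
    actually saves bandwidth (ratio below `threshold`), else send raw."""
    packed = zlib.compress(payload, level=1)  # fast level: send-path latency matters
    if len(packed) < threshold * len(payload):
        return b"Z" + packed    # 1-byte header marks a compressed message
    return b"R" + payload       # raw fallback for incompressible data

def receive(message: bytes) -> bytes:
    tag, body = message[:1], message[1:]
    return zlib.decompress(body) if tag == b"Z" else body

msg = b"halo-exchange " * 1000           # highly redundant, like many HPC payloads
wire = send_with_compression(msg)
assert receive(wire) == msg
print(len(wire), "<", len(msg))          # fewer bytes cross the network
```

The threshold-and-fallback structure is the key design point: compression must be adaptive, because spending CPU cycles to grow an incompressible message would reduce, not improve, effective bandwidth.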
Keywords:
Award Number:
2312942
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
Non-technical Summary: The continuous creation and development of tools through the tuning of materials' properties has been a cornerstone of much of the development seen in human societies. High mechanical hardness is a very desirable property for materials used in industrial machining and cutting, as it dramatically reduces wear and therefore the turnover rate of machining tools. The industry-standard superhard material is diamond, the hardest material currently known. The issue with diamond, however, is that not only does it require an expensive high-pressure, high-temperature synthesis, but it is also limited in its applications: it is thermally unstable in air, and when used to cut iron-containing materials, diamond breaks down to form iron carbides. These issues result in a high turnover rate for diamond tools and an inability to machine common iron-containing materials, like steel. Cheaper alternatives such as tungsten carbide (WC) have an easier, low-cost synthesis, but lack the extremely high hardness of diamond and therefore wear faster and are less effective. With this project, supported by the Solid State and Materials Chemistry program and the Ceramics program, both in NSF's Division of Materials Research, the principal investigators design and create superhard materials that approach the high hardness seen in diamond while replicating the low-cost, ambient-pressure synthesis found in WC. These boron-based superhard materials not only lower the cost of synthesis but also improve the lifetime of the tools made from them, thereby lowering the amount of waste generated in industrial machining.
Additionally, the metallic nature of these transition metal borides enables the use of high-precision cutting and shaping techniques like plasma cutting, which cannot be used with electrically insulating materials like diamond; this further reduces cost and waste in the fabrication of these tools. Beyond this research, the principal investigators undertake educational outreach in the greater Los Angeles area. This includes developing lessons and experiments for K-12 schools and presenting them to teachers, along with speaking to grade-school students about not just science but higher education as a whole. Graduate students who work on this project also gain valuable skills through both the research they conduct and the mentorship and outreach programs they participate in alongside their mentors. Technical Summary: Hardness is a mechanical property defined by a material's ability to resist irreversible shape change, known as plastic deformation. The hardness of a given material depends on several different material properties, which can broadly be grouped into two categories: intrinsic bonding effects and grain boundary effects. These two contributors to hardness are not mutually exclusive and can therefore be optimized separately and combined to dramatically improve the hardness of a material. This project, with support from the Solid State and Materials Chemistry program and the Ceramics program, both in NSF's Division of Materials Research, uses this two-pronged approach to superhard materials design and combines the synthesis of transition metal boride systems with high-pressure studies to obtain information about the internal deformation mechanisms of bulk and nanocrystalline materials. The research groups at UC Los Angeles study how doping small elements into the boron sites of the metal borides affects the bonding.
Using systems of di- and tetraborides with varying amounts of carbon, the principal investigators investigate the different carbon bonding regimes and their impact on hardness. Additionally, synthetic routes for the formation of nanostructured metal borides are explored. The principal investigators utilize new synthetic routes to create nanocrystalline forms of known superhard metal borides such as ReB2, WB2, and WB4 to further increase the hardness of these materials by maximizing the number of grain boundaries, which impede plastic deformation. These nanocrystalline materials also allow for new analytical techniques, such as Rietveld texture analysis, that are not possible for bulk materials. These two approaches to hardening metal borides can then be combined to create nanocrystalline solid solutions that benefit from both improved bonding effects and grain boundary effects. The broader impacts of the project are multifaceted and include extensive outreach by the principal investigators aimed at grade-school children; the training of graduate students in their Ph.D. studies and of undergraduates through research opportunities; and the development of novel superhard materials with the potential to improve the quality of industrial manufacturing and machining tools. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
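The grain-boundary contribution to hardness described in the technical summary is commonly modeled by the Hall-Petch relation, H = H0 + k·d^(-1/2), where d is the grain size: smaller grains mean more boundaries per unit volume and more obstacles to dislocation motion. A toy calculation with purely illustrative constants (not measured values for ReB2, WB2, or WB4):

```python
def hall_petch(h0_gpa: float, k: float, grain_size_um: float) -> float:
    """Hall-Petch relation: hardness rises as grain size d shrinks,
    because grain boundaries impede dislocation motion.
    H = H0 + k / sqrt(d), with d in micrometers here."""
    return h0_gpa + k / grain_size_um ** 0.5

# Illustrative constants only: H0 = 30 GPa intrinsic hardness, k = 2 GPa*um^0.5.
bulk = hall_petch(30.0, 2.0, 10.0)   # ~10 um grains (bulk material)
nano = hall_petch(30.0, 2.0, 0.05)   # ~50 nm grains (nanocrystalline)
print(bulk, nano)                    # the nanocrystalline form is predicted harder
```

This is why the nanostructuring thrust complements the bonding (intrinsic) thrust: the two terms of the relation can be improved independently and then combined in nanocrystalline solid solutions.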
Keywords:
Award Number:
2312932
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
Data-driven systems employ algorithms to aid human judgment in critical domains like hiring and employment, school and college admissions, credit and lending, and college ranking. Because of their impacts on individuals, population groups, institutions, and society at large, it is critical to incorporate fairness, accountability, and transparency considerations into the design, validation, and use of these systems. Current research in this area has mainly focused on classification and prediction tasks. However, scoring and ranking are also used widely, and they raise many concerns that methods designed for classification cannot handle, because classification labels are applied one item at a time, whereas ranking is explicitly designed to compare items. This project is focused on algorithmic score-based rankers that sort a set of candidates based on a “simple” scoring formula. Such rankers are widely used in critical domains because of the premise that they are easier to design, understand, and justify than complex learned models. Yet even these seemingly simple and transparent rankers may produce counter-intuitive results, unfairly demote candidates who belong to disadvantaged groups, and be prone to manipulation due to sensitivity to slight changes in the input data or in the scoring formula. Addressing these issues is challenging due to the interplay between the data being ranked and the ranker, the complex structure within the data, and the need to balance multiple objectives. This project considers the core technical challenges inherent in the responsible design and validation of algorithmic rankers, and pursues three synergistic aims. Aim 1 is to develop methods to quantify the impact of item attributes, and of specific engineering choices regarding attribute representation and pre-processing, on the ranked outcome (validation).
This information is then used to guide the data scientist in selecting a scoring function that corresponds to their understanding of quality or appropriateness (design). Aim 2 is to develop methods to quantify the impact of data uncertainty, of slight changes in the scoring formula, or both, on the ranked outcome (validation). This information is then used to guide the data scientist in intervening on data acquisition and pre-processing to reduce uncertainty, and in selecting a scoring function that is sufficiently stable (design). Aim 3 is to develop methods to quantify lack of fairness in ranked outcomes, with respect to candidates from under-represented or historically disadvantaged groups, in view of multiple fairness objectives and potential intersectional discrimination (validation). This information is then used to identify feasible trade-offs and assist the data scientist in navigating these trade-offs to enact fairness-enhancing interventions (design). Outcomes of this work will impact the practice of scoring and ranking in critical domains like educational program admissions, hiring, and college ranking. Insights from this work will enable technical interventions when appropriate, and also identify cases where they are insufficient and where more data should be collected or an alternative screening process should be used. This project will also include teaching and mentoring, public education and outreach, and broadening participation of members of under-represented groups in computing. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
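To make the "simple scoring formula" and its fairness side effects concrete, here is a sketch of a linear score-based ranker plus a position-discounted group-exposure measure. The attribute names, weights, and log-style position discount are illustrative assumptions, not this project's definitions:

```python
import math

def rank_by_score(candidates, weights):
    """A 'simple' score-based ranker: a linear formula over item attributes."""
    score = lambda c: sum(w * c[attr] for attr, w in weights.items())
    return sorted(candidates, key=score, reverse=True)

def group_exposure(ranking, group_key="group"):
    """Share of position-discounted exposure per group: items ranked
    higher receive more attention (1 / log2(position + 2) discount)."""
    exposure = {}
    for pos, c in enumerate(ranking):
        g = c[group_key]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(pos + 2)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}

candidates = [
    {"group": "A", "gpa": 3.9, "test": 0.95},
    {"group": "A", "gpa": 3.7, "test": 0.90},
    {"group": "B", "gpa": 3.8, "test": 0.70},
    {"group": "B", "gpa": 3.6, "test": 0.65},
]
ranking = rank_by_score(candidates, {"gpa": 0.5, "test": 0.5})
print(group_exposure(ranking))  # group A captures most of the exposure
```

Even this four-candidate example shows the ranking-specific issue the abstract raises: small score differences translate into large, position-amplified exposure differences between groups, and nudging a weight slightly can flip the entire ordering.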
Keywords:
Award Number:
2312978
Fiscal Year:
2023
Project Type:
Principal Investigator:
Host Institution:
Subject Classification:
Amount:
Funding Country
Funding Agency
Project Abstract:
NextG cellular networks must support a wide variety of emerging applications, such as augmented reality, autonomous vehicles, and remote healthcare, which require radio access with latency, throughput, and reliability guarantees hitherto unavailable. Simultaneously, the wireless environment is becoming increasingly dynamic across diverse spectrum bands, user mobility, and variable traffic patterns. Complex cross-layer interactions mean that tractable models are unavailable, so a machine learning approach to optimal resource utilization is critical. This project first develops an open, simple, and capable platform, entitled EdgeRIC, that supports fine-grain decision making at multiple timescales across the cellular network stack, and second, develops a structured machine-learning-based approach over this platform that optimally utilizes all system resources to maximize diverse application performance. The project is enhanced by an education plan focusing on machine learning and wireless networking, and by coordinating workshops and tele-seminars for the research community and industry professionals to disseminate ideas. Simultaneously, outreach in the form of summer camps and seminars for high school students focusing on machine learning enhances the impact of this project in STEM fields. The project aims at enabling intelligent decision making and control in cellular networks in realtime (on the order of 1 ms), while supporting training and adaptation at near-realtime (10 ms to 1 s) and non-realtime (over 1 s) timescales. It brings together mathematical methods to develop and analyze reinforcement learning (RL) algorithms, and systems development to integrate them into the cellular stack. The project addresses the key challenges of doing so via three main themes. The first focuses on realtime RL algorithms that schedule resources based on the relative priorities of applications, using the structure of the optimal policy to promote fast and scalable learning.
The second theme focuses on robust and fast adaptation of these policies, which must operate over dynamic environments and application needs. The third theme addresses scalable learning to determine hierarchical policies operating across the network layers and sites. The themes all come together on the EdgeRIC platform, which implements multi-modal learning algorithms using the standardized OpenAI Gym toolkit. The immediate impact of this project is in creating multi-timescale learning and control for the next generation of cellular networks. This project also advances the fundamental theory of meta and federated RL. The project supports seminars and summer camps for outreach, the development of new courses focusing on machine learning for wireless communication, and the coordination of workshops and tele-seminars for the research community and industry professionals to disseminate research ideas. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
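As a reference point for the per-slot, realtime scheduling decisions EdgeRIC targets, the classic proportional-fair rule (serve the user maximizing instantaneous rate divided by smoothed average throughput) can be sketched in a few lines; learned policies of the kind this project studies would replace exactly this per-slot decision. The rates and the EWMA constant below are made-up illustrative values:

```python
def pf_schedule(rates, avg_throughput):
    """Proportional-fair metric: pick the user with the highest ratio of
    instantaneous rate to its exponentially smoothed average throughput."""
    return max(range(len(rates)), key=lambda u: rates[u] / avg_throughput[u])

def run(slots, rates_per_slot, alpha=0.1):
    """Simulate per-slot scheduling; returns how many slots each user got."""
    n = len(rates_per_slot[0])
    avg = [1e-6] * n              # tiny floor avoids division by zero
    served = [0] * n
    for t in range(slots):
        u = pf_schedule(rates_per_slot[t], avg)
        served[u] += 1
        for v in range(n):        # EWMA update for all users each slot
            got = rates_per_slot[t][v] if v == u else 0.0
            avg[v] = (1 - alpha) * avg[v] + alpha * got
    return served

# Two users: user 0 always has double the channel rate of user 1.
slots = 200
rates = [[2.0, 1.0] for _ in range(slots)]
print(run(slots, rates))  # PF still gives both users a fair share of slots
```

Pure rate maximization would starve user 1 entirely; proportional fairness trades a little total throughput for fairness, which is the kind of structured per-slot policy an RL scheduler must learn to match or beat under priorities and dynamics.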
Award Number: 2312982
Fiscal Year: 2023
Abstract:
Scientists conduct analyses that rely on large-scale simulations to achieve breakthroughs in multiple scientific domains, such as climate, energy, and quantum physics. As system complexity increases, future large-scale systems and the data generated, processed, stored, and transmitted by them are subject to increasingly frequent soft errors, or silent data corruption. Importantly, this silently compromised data may go undetected because current High-Performance Computing (HPC) software stacks largely lack mechanisms to inform scientists of silent data corruption that could adversely affect the integrity of their scientific interpretation. To combat silent data corruption in HPC systems, this project introduces highly efficient and cost-effective mechanisms to monitor and detect soft errors. Through the use of unsupervised error detection, this project increases scientists' confidence in extreme-scale scientific simulations and data analyses, which advance the data-intensive science discovery needed to solve some of the world's most complex contemporary problems, such as predicting severe weather conditions, designing new materials, and making new energy sources practical. The methodologies of this project are also applicable to general-purpose computing systems, increasing security and reliability on traditional computing and Internet of Things devices. This research applies compressive sensing and machine learning, especially unsupervised approaches, to accurately detect soft errors and hardware errors in current and future HPC systems. A compact representation that corresponds to the original dataset is efficiently obtained through compressive sensing coupled with a hardware-assisted data collection mechanism that requires no changes to existing infrastructure.
This is used with a spatiotemporal anomaly detection model for in situ characterization of soft errors and errors caused by hardware malfunctions, detecting anomalies that deviate from acceptable ranges. The approach is built into the scientific workflow and operates seamlessly with the application, requiring no application modification or customization. Validation of the mechanism across multiple HPC platforms using scientific workflows allows scientists to analyze and verify their datasets with increased levels of trust. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
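A minimal sketch of the idea, assuming a random Gaussian projection as the compact representation and a simple norm-jump test as the detector (the project's actual model is a learned spatiotemporal one, and these parameter choices are invented for the example):

```python
import math
import random

def make_projection(m, n, seed=0):
    """Random Gaussian sketching matrix Phi (m << n), scaled so that
    ||Phi x|| approximates ||x|| in expectation."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) / math.sqrt(m) for _ in range(n)] for _ in range(m)]

def sketch(phi, x):
    """Compact representation y = Phi @ x of a large data snapshot x."""
    return [sum(p * v for p, v in zip(row, x)) for row in phi]

def is_anomalous(y_prev, y_curr, tol=3.0):
    """Flag a timestep whose sketch jumps far from the previous one,
    relative to the sketch's own magnitude."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(y_prev, y_curr)))
    scale = math.sqrt(sum(a * a for a in y_prev)) or 1.0
    return diff > tol * scale / math.sqrt(len(y_prev))
```

Because only the small sketch is stored per timestep, the check adds little overhead: a smooth drift in the field passes, while a single corrupted value perturbs the sketch enough to be flagged.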
Award Number: 2313024
Fiscal Year: 2023
Abstract:
Emerging applications are pushing the limits of high-performance computing. New hardware accelerators are being developed to handle particular workloads much more efficiently than can be achieved in software, but it is expensive to develop these hardware units and the software that connects to them. In fact, today the majority of hardware-development budgets go to finding and fixing hardware bugs and developing software support for the new hardware. This project studies how to improve that whole development process with end-to-end formal verification, where machine-checked mathematical proofs establish correct behavior for the whole hardware-software stack. The research team is specifically concerned with tensor computations, as appear in graphics and machine learning. The project's novelty is in extending the idea of end-to-end mechanized proof for the first time to cover hardware accelerators, specifically tensor processing units (TPUs). Its impact is the potential to dramatically lower the cost of developing new hardware accelerators, or of iterating on their implementations over time, while providing strong mathematical correctness guarantees to applications, e.g., the tools to show that a machine-learning system protects user privacy despite the use of complex performance optimizations. Three main levels of the computing system are covered by the project, all with logical specifications and proofs that are to be composed into system-level theorems in the Coq proof assistant. The top level is a source programming language called Exo, which allows programmer-guided optimization of nested-loop programs, where appropriate use of accelerators is gradually introduced through rewrite rules. General optimization tactics, or reusable transformation procedures, are being developed alongside their proofs.
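The rewrite-rule idea can be illustrated on a toy loop IR: a rule matches a known loop nest and replaces it with an equivalent accelerator invocation. The node shapes and the `tpu_matmul` name below are invented for the example, and unlike this unverified sketch, the real Exo/Coq toolchain carries machine-checked proofs that each rewrite preserves semantics:

```python
# Tiny loop IR: programs are nested tuples such as
#   ("seq", child, ...), ("loopnest", index_vars, body), ("stmt", text).

def rewrite_matmul_to_accel(node):
    """If `node` is the canonical 3-deep matmul loop nest, replace it with
    a single (hypothetical) accelerator call; otherwise recurse into
    sequences and leave everything else untouched."""
    if (node[0] == "loopnest"
            and node[1] == ("i", "j", "k")
            and node[2] == ("C[i,j] += A[i,k] * B[k,j]",)):
        return ("accel_call", "tpu_matmul", "A", "B", "C")
    if node[0] == "seq":
        return ("seq",) + tuple(rewrite_matmul_to_accel(ch) for ch in node[1:])
    return node
```

Because each rule is a local, semantics-preserving transformation, rules can be applied gradually and in any order the programmer directs, which is the style of scheduling Exo supports.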
The middle level is the Bedrock2 programming language, which is similar to the C language, with formal support for external functions that can be used to model hardware facilities. That mechanism is being extended to support modern accelerator interfaces, in contrast to the simpler, embedded-systems-oriented interfaces of past work. Finally, processors and accelerators themselves are verified, requiring new developments in modular specification and proof of hardware. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Award Number: 2313061
Fiscal Year: 2023
Status: ongoing
Abstract:
As commercial and scientific applications generate data at increasingly high rates, the carbon footprint associated with data movement is becoming a critical concern, particularly for High-Performance Computing (HPC) and Cloud data centers. While there is substantial research focusing on power management techniques at the hardware level and lower networking stack layers during data transfers, little attention has been paid to energy-saving measures at the application layer for computing systems such as servers, HPC centers, and Cloud data centers during network data transmission. The existing strategies in this realm are either prohibitively expensive, impractical in the short term, or sacrifice performance in pursuit of increased energy efficiency. This project develops an innovative application-layer solution that is cost-effective, practical for immediate deployment, and, importantly, does not compromise performance while boosting energy efficiency. It can adaptively fine-tune several application-layer and kernel-layer transfer parameters, ensuring efficient utilization of computing and networking resources. This, in turn, minimizes data transfer energy consumption without undermining end-to-end performance. This approach to energy-efficient data transfers underscores the project's transformative potential. The models, algorithms, and tools developed within this project are poised to augment performance and reduce power consumption during end-to-end data transfers, potentially saving gigawatt-hours of energy and contributing millions of dollars in savings to the US economy. Furthermore, this project seeks to spread research insights across all tiers of education.
The well-structured research activities promise to benefit K-12, undergraduate, and graduate students alike, fostering their academic growth and nurturing future scientists in this critical field. This project develops novel application-layer models, algorithms, and tools for (1) prediction and tuning of the best cross-layer transfer-parameter combination for energy-efficient and high-performance data transfers at HPC and Cloud data centers; (2) a deep reinforcement learning based approach that can adapt to dynamically changing conditions in a wide range of network and end-system configurations; (3) accurate estimation of the accompanying network device power consumption due to changing data transfer rates on active intra- and inter-data center network links, and dynamic readjustment of the transfer rate to balance the energy vs. performance ratio; and (4) a suite of service-level-agreement based energy-efficient transfer algorithms for HPC administrators and Cloud service providers with dynamically adjustable performance and energy-efficiency goals. The evaluation and validation of the proposed models and algorithms are performed in realistic scenarios in collaboration with the HPC Center at Texas Tech University and the Distributed Cloud Management group at IBM. The research outcomes of this project will fill a significant gap in data transfer energy efficiency in HPC and Cloud data centers. This project's eventual goal is to translate the research activities into robust, production-quality software libraries that will reduce the carbon footprint of data movement for a range of user communities dealing with large amounts of data.
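A much-simplified sketch of application-layer parameter tuning: a grid search over concurrency and parallelism that keeps the setting with the best throughput per watt. The measurement model is synthetic and the numbers are invented; the project's actual tuner is learning-based and adapts online rather than sweeping a grid:

```python
def fake_measure(conc, par):
    """Synthetic probe: throughput saturates at an assumed link capacity
    of 32 units while power grows with the number of active streams."""
    streams = conc * par
    return min(streams, 32), 50.0 + streams  # (throughput, power)

def tune_transfer(measure, levels=(1, 2, 4, 8, 16)):
    """Grid-search concurrency x parallelism and keep the setting with
    the best throughput-per-watt; `measure` stands in for a live probe."""
    best, best_score = None, float("-inf")
    for conc in levels:
        for par in levels:
            throughput, power = measure(conc, par)
            if throughput / power > best_score:
                best, best_score = (conc, par), throughput / power
    return best, best_score
```

The synthetic model already shows the core trade-off: adding streams past the link's saturation point only burns power, so the energy-optimal setting is the smallest one that fills the link.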
The project will enable broader impacts through the development of graduate and undergraduate curricula, K-12 outreach programs, summer boot camps, the recruitment of underrepresented minority groups, and broadening participation in computing. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Award Number: 2312994
Fiscal Year: 2023
Abstract:
Supercell thunderstorms are responsible for the majority of significant tornadoes in the United States. However, not all supercells form tornadoes, and the false alarm rate for tornado warnings is stubbornly high. This small field campaign will use a variety of instrumentation, including radar trucks and uncrewed aerial systems (UAS), to observe atmospheric boundaries that form in what is known as the left-flank region of a supercell. The idea is that observing small-scale motions and boundaries near supercells will allow researchers to better understand why some supercells form tornadoes while others do not. Beyond the practical operational-forecasting aspect of this project, the field campaign will provide 15-20 students with the opportunity to participate in the collection of data. This award is for the Targeted Observations by Radars and UAS of Supercells Left-flank-Intensive Experiment (TORUS-LItE), an observational study of specific processes in the left flank of supercell thunderstorms and how they relate to tornadogenesis. The research team will deploy UAS, radar, sounding, and mobile Mesonet assets during a three-week period in May and June 2023 in the Great Plains to directly sample boundaries and coherent structures. The investigators believe that advancing understanding of the relationship between supercell tornadogenesis and left/forward-flank boundaries and coherent structures requires cross-sectional datasets that enable characterization of the structure and evolution of these boundaries and associated coherent structures in the context of supercell/mesocyclone strength and the proximate evolution of near-surface rotation. The science questions to be studied are: A) How often are left/forward-flank boundaries coincident with enhanced vorticity, and how does the magnitude of this vorticity relate to the characteristics of the associated boundaries and the environmental/ambient state?
B) Do the characteristics of streamwise vorticity currents (SVCs) and associated boundaries attendant to tornadic storms differ significantly from those associated with non-tornadic storms? C) Can the dense-side airmasses of left-flank boundaries be characterized as density currents, and are SVCs the attendant Kelvin-Helmholtz (KH) billows? This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Award Number: 2313084
Fiscal Year: 2023
Abstract:
Quantum computing is considered one of the most promising alternatives for going beyond Moore's Law scaling, providing drastic acceleration for selected applications, and furthering the information technology revolution. The groundbreaking research carried out over the past four decades indicates that large-scale quantum systems may be used for far-reaching applications ranging from simulations of complex quantum matter to general-purpose quantum information processing. Several quantum hardware platforms have made substantial advances in the past decade. Neutral atoms trapped in arrays of optical tweezers have recently emerged as an exceptionally promising experimental platform for programmable quantum simulation and quantum computation. These systems scale readily to large numbers of qubits, and experiments have demonstrated that the qubit couplings used for entanglement can be reconfigured dynamically during the quantum computation process; they are thus named dynamically reconfigurable atom arrays (DRAAs). DRAAs introduce a number of unique opportunities. In particular, they support a cache-compute computation model, where temporary data can be "cached" in a specific atom array for later computation, mimicking the architecture of modern CPUs. Moreover, algorithms involving error-corrected logical qubits can be implemented very efficiently, with the number of controls scaling with the number of logical (rather than physical) qubits. However, to take full advantage of this unique architecture, novel compilation methods need to be developed, as programming a DRAA involves not only qubit placement and gate scheduling but also atom movement. In addition, error correction needs to be considered and optimized under the constraint of available resources. This project aims at developing a novel DRAA compiler that simultaneously considers the problems of qubit placement, gate scheduling, atom movement, and selective error correction under a common compilation framework.
In particular, it addresses four interrelated problems: (i) scalable compilation for DRAAs that can efficiently support mapping, scheduling, and atom movement for arrays with hundreds to tens of thousands of atoms; (ii) efficient support of the cache-based DRAA architecture, which has a memory zone, an entanglement zone, and a readout zone, with data-reuse and data-movement optimization; (iii) customized support for hardware-efficient error correction on DRAAs that takes full advantage of the atom-movement capability, transversal properties, and DRAA-specific error biasing; and (iv) selective error correction under resource constraints, where error criticality is analyzed and identified. The algorithms and compilation flow will be tested experimentally on the DRAA quantum computer developed at Harvard University. The project is an interdisciplinary collaborative effort by a team of researchers from the University of California Los Angeles (UCLA) Computer Science Department and the Harvard Physics Department. The investigators plan to integrate the research with education to expose students to the exciting opportunities of quantum computing and to train a new generation of students with deep knowledge in both quantum computing device technologies and large-scale design automation and optimization. The research results from this project will be disseminated widely via publications and tutorials at various conferences. The team will further facilitate technology transfer and community-wide participation through open-source releases of both the compilation system and the DRAA experimental data developed under this project.
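One small ingredient of such a compiler, packing two-qubit gates into conflict-free parallel layers, can be sketched greedily. This toy ignores placement, atom movement, and error correction, and it is not the project's algorithm, only an illustration of the gate-scheduling subproblem:

```python
def schedule_layers(gates):
    """Greedy gate scheduling: pack two-qubit gates (pairs of qubit ids)
    into parallel layers so no qubit appears twice in the same layer."""
    layers = []
    for gate in gates:
        for layer in layers:
            # A gate fits a layer only if its qubits are disjoint from
            # every gate already scheduled there.
            if not any(q in g for g in layer for q in gate):
                layer.append(gate)
                break
        else:
            layers.append([gate])  # no existing layer fits: open a new one
    return layers
```

In a DRAA the cost model is richer: moving atoms can turn a conflicting pair into a parallelizable one, which is why the project treats scheduling and movement jointly rather than greedily.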
Finally, the investigators plan to broaden participation in computing via high-school summer programs and partnerships with various diversity and outreach programs, such as the Center for Excellence in Engineering and Diversity at UCLA and CUAEngage at Harvard. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Award Number: 2313120
Fiscal Year: 2024
Abstract:
Non-technical: As Antarctica's ice sheets melt, freshwater enters the ocean, which can increase sea ice cover adjacent to the ice sheet. Sea ice formation changes the density of sea water, thus regulating the circulation of water throughout the global ocean. Sea ice expansion may also slow down the processes driving ice sheet melt. This research will examine the relationship between surface ocean cooling and ocean freshening from geological records recovered adjacent to, as well as farther seaward from, the ice margin. These records will provide key information regarding the role of sea ice expansion in the process of deglaciation. The geological records span time intervals when climate conditions were similar to predicted future warming scenarios. Therefore, these new records will provide perspective on what the consequences of ice sheet melting and surface ocean freshening might be in the near-term (<100 years) and long-term (>100 years) future. Broader impacts will include training for undergraduate researchers, dissemination of research findings through the Time Scavengers website and hands-on community science events, and engagement with school teachers and students. Technical Description: Ocean warming and Antarctic ice mass loss over the last few decades have resulted in surface freshening in the Ross Sea sector of the Southern Ocean. Surface ocean freshening is linked to sea ice expansion, which has an important role in regulating global ocean circulation and is hypothesized to be a negative climate feedback slowing ice retreat. The forcings and feedbacks involved in Southern Ocean warming, ice shelf melt, surface ocean freshening, sea ice growth, and deep water formation remain poorly understood due to the short-term nature of instrumental records.
The objectives of this research program are to: (1) generate multi-proxy Plio-Pleistocene sea surface temperature and meltwater records in ice-proximal and ice-marginal settings during deglacial and interglacial intervals from archived sediment cores of ANDRILL AND-1B as well as International Ocean Discovery Program (IODP) U1524, and (2) examine the external forcing mechanisms and orbital pacing of sea surface temperature and meltwater variability during the Plio-Pleistocene. Results from this project aim to better understand ocean-ice sheet dynamics under the varying climate scenarios of the most recent geological past. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Award Number: 2313131
Fiscal Year: 2023
Abstract:
Supervised machine learning has found widespread application, often achieving state-of-the-art performance. However, these algorithms rely on labeled training instances, which can be challenging to acquire. Labeling is often done by humans and requires time and money. Active Learning strives to minimize labeling costs by identifying the most informative instances for annotation. While Active Learning techniques have shown promise in producing high-performance models with fewer labels, their applications remain constrained by the necessity for multiple interaction rounds with annotators, which can be time-consuming or infeasible. This project aims to advance Active Learning algorithms and the understanding of their fundamental capabilities in scenarios with limited interaction rounds. A broad spectrum of machine learning applications is expected to benefit from the results of this research, reducing the time and cost associated with obtaining sufficient data for training accurate models. Additionally, this project engages underrepresented minority students through hands-on research and learning activities, develops course modules on resource-efficient machine learning, and disseminates findings to industry and academia via an extensive online Active Learning tutorial. This project will launch a comprehensive investigation of few-round active learning, where the learner can actively request feedback on specific data points within a limited number of rounds. To achieve this, the project will interleave two algorithmic tasks: robust data utility quantification and planning with limited adaptivity. First, the investigators will explore methods to measure the utility of unlabeled data, taking into account data size, underlying data characteristics, and downstream learning tasks.
Subsequently, the team will develop algorithms that optimize the data utility metric while simultaneously improving the metric's quality over time in a few-round active learning setting. The project findings will establish principled approaches for addressing a novel exploration-exploitation dilemma specific to few-round active learning and provide a fundamental understanding of the role of adaptivity in budgeted learning. Finally, the project will evaluate the proposed approaches across various high-impact machine learning applications, including autonomous driving, smart buildings, dialog systems, and biochemical engineering. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
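A minimal sketch of the few-round loop, assuming a toy nearest-centroid model on 1-D data and a margin-based uncertainty score as the utility measure; the real project studies far richer utility metrics and models:

```python
def centroid_fit(labeled):
    """Per-class mean of 1-D points: a stand-in for the real model."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def uncertainty(model, x):
    """Margin between the two nearest class centroids (smaller = less sure)."""
    d = sorted(abs(x - c) for c in model.values())
    return d[1] - d[0] if len(d) > 1 else float("inf")

def few_round_al(pool, oracle, seed_labels, rounds=2, batch=3):
    """Each round: fit the model, then query labels for the `batch` most
    uncertain pool points; the round budget is the few-round constraint."""
    labeled = list(seed_labels)
    for _ in range(rounds):
        model = centroid_fit(labeled)
        pool = sorted(pool, key=lambda x: uncertainty(model, x))
        queries, pool = pool[:batch], pool[batch:]
        labeled += [(x, oracle(x)) for x in queries]
    return centroid_fit(labeled)
```

The tension the project studies is visible even here: with only two rounds, the first batch must both sharpen the decision boundary (exploit) and improve the utility estimate used in the second round (explore).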
Award Number: 2313124
Fiscal Year: 2023
Abstract:
Today's large-scale simulations are producing vast amounts of data that are revolutionizing scientific thinking and practices. For instance, a fusion simulation can produce 200 petabytes of data in a single run, while a climate simulation can generate 260 terabytes of data every 16 seconds at a 1 square kilometer resolution. As the disparity between data generation rates and available I/O bandwidths continues to grow, data storage and movement are becoming significant bottlenecks for extreme-scale scientific simulations in terms of in situ and post hoc analysis and visualization. The disparity necessitates data compression, which compresses large-scale simulation data in situ, and decompresses it in situ and/or post hoc for analysis and exploration. On the other hand, a critical step in extracting insight from large-scale simulations involves the definition, extraction, and evaluation of features of interest. Topological data analysis has provided powerful tools to capture features from scientific data in turbulent combustion, astronomy, climate science, computational physics and chemistry, and ecology. While lossy compression is leveraged to address the big data challenges, most existing lossy compressors are agnostic of, and thus fail to preserve, topological features that are essential to scientific discoveries. This project aims to research and develop advanced lossy compression techniques and software that preserve topological features in data for in situ and post hoc analysis and visualization at extreme scales. The success of this project will promote scientific research on driving applications in cosmology, climate, and fusion by enabling efficient and effective compression for scientific data, and the impact scales to other science and engineering disciplines.
Furthermore, the research products of this project will be integrated into visualization and parallel processing curricula, disseminated via research and training workshops, and used to attract underrepresented students for broadening participation in computing. This project tackles the data compression, analysis, and visualization needs in extreme-scale scientific simulations by developing a suite of topology-aware data compression algorithms for scalar field and vector field data. Such algorithms effectively reduce the size of data while preserving critical features defined by topological notions. This project will define and enforce topology-aware constraints over advanced lossy compression algorithms. Such capabilities have not been studied systematically within today's data compression paradigm. This project will impact specific fields, including computational science, data analysis, data compression, and visualization, and the broader scientific community. The research products of this project will be delivered as publicly available software to significantly advance the research cyberinfrastructure for current and upcoming exascale systems. This project will foster novel discoveries in multiple scientific disciplines beyond cosmology, climate, and fusion by enabling efficient and effective compression on a wide range of platforms. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
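The notion of a topology-aware constraint can be illustrated in one dimension: uniformly quantize a scalar field, but store its local extrema (the critical points) losslessly so the compressed field keeps the same topology. This is only a toy sketch of the constraint under invented parameters, not the project's algorithm:

```python
def local_extrema(field):
    """Indices of strict local maxima and minima of a 1-D scalar field:
    the field's critical points in this toy setting."""
    return [i for i in range(1, len(field) - 1)
            if (field[i - 1] < field[i] > field[i + 1])
            or (field[i - 1] > field[i] < field[i + 1])]

def compress_preserving_extrema(field, step=0.5):
    """Uniform quantization (the lossy part), except that critical points
    are kept exactly so maxima/minima survive decompression."""
    keep = set(local_extrema(field))
    return [v if i in keep else round(v / step) * step
            for i, v in enumerate(field)]
```

A plain quantizer with the same step size may merge or shift extrema; pinning the critical points is the simplest form of the topology-preserving constraint the project generalizes to 2-D/3-D scalar and vector fields.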
Award Number: 2313146
Fiscal Year: 2023
Abstract:
Non-volatile memory is a type of computer memory that retains stored data upon a power loss or system crash. Due to its large capacity and low energy footprint compared to traditional volatile memory, non-volatile memory has long been envisioned as an ideal solution for building large-scale, cost-effective, energy-efficient, and recoverable applications in many critical domains, including high-performance computing, machine learning, and embedded systems. Although non-volatile memory is available as commercial memory chips and offers numerous promises, it has not yet been widely adopted in production systems. The major obstacle is the difficulty of ensuring that data is timely and correctly written to non-volatile memory, allowing it to be restored to a consistent state after a crash. Currently, application developers carry the burden of porting legacy applications to non-volatile memory, which is tedious and error-prone. This project seeks to establish a generic framework for user-transparent persistence and crash consistency that allows unmodified legacy applications to run efficiently and correctly with non-volatile memory. The success of this project will help unleash the full potential of non-volatile memory and make it easier to adopt. The research will also provide valuable insights into data management in future hybrid, disaggregated memory systems. In addition, this project involves mentoring Ph.D. students, engaging minority students, course development, and K-12 outreach activities. This project integrates non-volatile memory into the page/buffer cache in memory management (an abstraction that bridges the view of byte-addressable memory and a backing memory device) to provide persistence and crash consistency to user-space programs with little or no user involvement.
The challenges lie in 1) how to intercept program updates and redirect them to non-volatile memory for persistence; 2) how to properly order the updates and ensure update atomicity to guarantee crash consistency; and 3) how to efficiently integrate non-volatile memory into page/buffer cache management without incurring noticeable overhead or performance degradation. This project addresses these challenges by focusing on persisting three types of program data (file-backed data, dynamically allocated application memory, and program metadata for virtual memory management, such as page tables) and exploring various software and hardware techniques, such as copy-on-write, undo logging, shadow paging, and extended page tables, for each data type to achieve efficient crash consistency. This project advances the understanding of hybrid memory management for volatile and non-volatile memories while simultaneously achieving high usability, good backward compatibility, and high efficiency. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
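Among the techniques listed, undo logging is easy to illustrate: log each old value before writing in place, truncate the log at commit, and roll the log back on recovery. The sketch below simulates the idea in ordinary memory; real persistent-memory code would additionally need cache-line flushes and ordering fences to make the log durable before the in-place write:

```python
class UndoLogMemory:
    """Toy undo-logging store: old values are logged before each in-place
    write; a crash before commit is repaired by rolling the log back."""

    def __init__(self, data):
        self.data = dict(data)  # stands in for persistent memory
        self.log = []           # stands in for the persistent undo log

    def tx_write(self, key, value):
        self.log.append((key, self.data.get(key)))  # persist old value first
        self.data[key] = value                      # then write in place

    def commit(self):
        self.log.clear()        # log truncation marks the transaction durable

    def recover(self):
        while self.log:         # crash before commit: undo in reverse order
            key, old = self.log.pop()
            if old is None:
                self.data.pop(key, None)
            else:
                self.data[key] = old
```

The framework's goal is that this machinery lives inside the page/buffer cache layer, so application code never writes such transactions itself.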
Award Number: 2313156
Fiscal Year: 2023
Abstract:
This project aims to create an advanced software tool that automates the design and optimization of electromagnetic (EM) systems, with particular emphasis on magnetic resonance imaging (MRI), an essential medical imaging technology. EM systems are broadly applied and include cell phone antennas, 5G networks, and medical imaging devices. Currently, the design and optimization of these systems are labor-intensive and require a high level of user expertise and interaction. This software aims to simplify and automate the process. The software will be tested and validated by addressing an open problem in MRI: the optimization of radiofrequency (RF) coil design, a key part of MRI machines that significantly impacts the quality of the images produced. Current manual optimization processes for RF coils are inefficient, and the results are suboptimal, leading to longer scanning times and lower accuracy. This project will not only advance technology and improve the design process of EM systems, but will also support interdisciplinary training by involving students from various disciplines. Further, the project holds potential societal benefits in healthcare, cognitive neuroscience, and other sectors that rely on MRI performance. The project's objective is to develop and validate a software pipeline for the shape optimization of EM systems, focusing in particular on automating the forward simulation and the inverse problem of parameter optimization. The proposal combines geometry processing techniques and advanced EM simulations to automate this optimization. The approach will involve novel techniques for differentiable EM simulation, machine learning for acceleration, and shape modeling for automatic exploration of geometric variations. The focus of the project will be on RF coil design in MRI machines, particularly those operating at high field strengths (3T and 7T), where existing coil designs achieve only 70-80% of the optimum signal-to-noise ratio.
The team will fabricate and test an optimized 7T coil design and compare its performance with commercial RF coils. The project will also introduce an innovative adjoint formulation for efficient shape-gradient computation, enabling gradient-based optimization for EM systems with hundreds of design parameters. The results of this project will include an open-source software suite for EM system design and optimization, and it is expected to impact other research projects and contribute to educational and training resources. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
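The appeal of an adjoint formulation can be shown on a toy diagonal system: one forward solve plus one adjoint solve yields the gradient with respect to all design parameters at once, whereas finite differences need an extra solve per parameter. The forward model below is invented for the example and bears no relation to the project's actual EM solver:

```python
def solve(p, b):
    """Toy forward model: diagonal system a_i(p) * u_i = b_i with a_i = p_i."""
    return [bi / pi for pi, bi in zip(p, b)]

def grad_adjoint(p, b):
    """Adjoint gradient of J(u) = sum u_i^2: solve A^T lam = dJ/du once,
    then dJ/dp_i = -lam_i * u_i (since d(Au)/dp_i contributes u_i)."""
    u = solve(p, b)
    lam = [2 * ui / pi for ui, pi in zip(u, p)]   # adjoint solve
    return [-li * ui for li, ui in zip(lam, u)]

def grad_fd(p, b, h=1e-6):
    """Finite differences: one extra forward solve per parameter,
    exactly the cost an adjoint formulation avoids."""
    J = lambda q: sum(ui * ui for ui in solve(q, b))
    base = J(p)
    return [(J(p[:i] + [p[i] + h] + p[i + 1:]) - base) / h
            for i in range(len(p))]
```

With hundreds of shape parameters, this difference between one extra solve and hundreds of extra solves is what makes gradient-based EM shape optimization practical.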