The expense of quantum chemistry calculations significantly hinders the search for novel catalysts. Here, we provide a tutorial for using an easy and highly cost-efficient calculation scheme called alchemical perturbation density functional theory (APDFT) for rapid predictions of binding energies of reaction intermediates and reaction barrier heights based on Kohn-Sham density functional theory reference data. We outline standard procedures used in computational catalysis applications, explain how computational alchemy calculations can be carried out for those applications, and then present benchmarking studies of binding energy and barrier height predictions. Using a single OH binding energy on the Pt(111) surface as a reference case, we use computational alchemy to predict binding energies of 32 variations of this system with a mean unsigned error of less than 0.05 eV relative to single-point DFT calculations. Using a single nudged elastic band calculation for CH4 dehydrogenation on Pt(111) as a reference case, we generate 32 new pathways with barrier heights having mean unsigned errors of less than 0.3 eV relative to single-point DFT calculations. Notably, this easy APDFT scheme adds no appreciable computational cost once the reference calculations are done, showing that simple applications of computational alchemy can significantly impact DFT-driven explorations for catalysts. To accelerate computational catalysis discovery and ensure computational reproducibility, we also include Python modules that allow users to perform their own computational alchemy calculations.
Keywords: computational catalysis, density functional theory (DFT), adsorption energies, nudged elastic band calculations, binding energies, barrier heights
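As a concrete illustration of the ingredients behind such a scheme: to first order, APDFT estimates the energy change upon transmuting nuclear charges from the electrostatic potential of the reference calculation evaluated at the nuclei. The sketch below is a minimal illustration of that estimate applied to a binding energy; all function names and the sign convention are assumptions for illustration, not the paper's released Python modules.

```python
import numpy as np

def apdft1_energy_change(esp_at_nuclei, delta_Z):
    """First-order APDFT estimate of the energy change when nuclear
    charges are perturbed by delta_Z (e.g. transmuting one surface atom).

    esp_at_nuclei : electrostatic potential evaluated at each nucleus
                    of the reference DFT calculation (one value per atom).
    delta_Z       : change in nuclear charge at each atom.

    To first order, dE/dZ_I is the electrostatic potential at nucleus I,
    so Delta E ~ sum_I mu_I * Delta Z_I.
    """
    esp = np.asarray(esp_at_nuclei, dtype=float)
    dZ = np.asarray(delta_Z, dtype=float)
    return float(np.dot(esp, dZ))

def predicted_binding_energy(E_bind_ref, esp_slab_ads, esp_slab, delta_Z):
    """Alchemical estimate of the binding energy of the transmuted system:
    apply the same Delta Z to the adsorbate+slab and bare-slab reference
    calculations and take the difference of the two energy changes."""
    return (E_bind_ref
            + apdft1_energy_change(esp_slab_ads, delta_Z)
            - apdft1_energy_change(esp_slab, delta_Z))
```

Because the electrostatic potentials are by-products of the reference DFT calculations, evaluating this estimate for each of the 32 transmuted systems costs essentially nothing beyond a dot product.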
Engineering microscopic collectives of cells or microrobots is challenging due to the often-limited capabilities of the individual agents, our inability to reliably program their motion and local interactions, and difficulties visualising their behaviours. Here, we present a low-cost, modular and open-source Dynamic Optical MicroEnvironment (DOME) and demonstrate its ability to augment microagent capabilities and control collective behaviours using light. The DOME offers an accessible means to study complex multicellular phenomena and implement de novo microswarms with desired functionalities.
Interacting with surrounding road users is a key feature of vehicles and is critical for intelligence testing of autonomous vehicles. Existing interaction modalities in autonomous vehicle simulation and testing are not sufficiently smart and can hardly reflect human-like behaviors in real-world driving scenarios. To further improve the technology, in this work we present a novel hierarchical game-theoretical framework to represent naturalistic multi-modal interactions among road users in simulation and testing, which is then validated by the Turing test. Given that human drivers have no access to the complete information of the surrounding road users, Bayesian game theory is utilized to model the decision-making process. Then, a probing behavior is generated by the proposed game-theoretic model and further applied to control the vehicle via a Markov chain. To validate its feasibility and effectiveness, the proposed method is tested through a series of experiments and compared with existing approaches. In addition, Turing tests are conducted to quantify the human-likeness of the proposed algorithm. The experimental results show that the proposed Bayesian game-theoretic framework can effectively generate representative scenes of human-like decision-making during autonomous vehicle interactions.
Memristive devices being applied in neuromorphic computing are envisioned to significantly improve the power consumption and speed of future computing platforms. The materials used to fabricate such devices will play a significant role in their viability. Graphene is a promising material, with superb electrical properties and the ability to be produced sustainably. In this paper, we demonstrate that a fabricated graphene-pentacene memristive device can be used as a synapse within Spiking Neural Networks (SNNs) to realise Spike Timing Dependent Plasticity (STDP) for unsupervised learning in an efficient manner. Specifically, we verify the operation of two SNN architectures tasked with single-digit (0-9) classification: (i) a simple single-layer network, where inputs are presented in 5x5 pixel resolution, and (ii) a larger network capable of classifying the Modified National Institute of Standards and Technology (MNIST) dataset, where inputs are presented in 28x28 pixel resolution. Final results demonstrate that for 100 output neurons, after one training epoch, a test set accuracy of up to 86% can be achieved, which is higher than prior art using the same number of output neurons. We attribute this performance improvement to the homeostatic plasticity dynamics that we used to alter the thresholds of neurons during training. Our work presents the first investigation of the use of green-fabricated graphene memristive devices to perform a complex pattern classification task. This can pave the way for future research in using graphene devices with memristive capabilities in neuromorphic computing architectures. In favour of reproducible research, we make our code and data publicly available at https://anonymous.4open.science/r/c69ab2e2-b672-4ebd-b266-987ee1fd65e7.
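For readers unfamiliar with STDP, the pair-based rule that such synapses emulate can be sketched as a simple weight update: the synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed otherwise. This is a generic textbook sketch, assuming illustrative parameter values rather than the fabricated device's measured characteristics.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP rule.

    dt = t_post - t_pre (ms). dt > 0 means the presynaptic spike
    preceded the postsynaptic spike, causing potentiation (LTP);
    otherwise the weight is depressed (LTD). Parameter values are
    illustrative defaults, not device measurements.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau_plus)     # LTP, decays with delay
    else:
        dw = -a_minus * np.exp(dt / tau_minus)   # LTD, decays with delay
    return float(np.clip(w + dw, w_min, w_max))  # keep weight in range
```

In a memristive realization, the clipped weight range corresponds to the device's bounded conductance window, and the exponential timing dependence is approximated by overlapping pre- and post-synaptic voltage pulses.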
Soft actuators and robotic devices have been increasingly applied to the field of rehabilitation and assistance, where safe human-machine interaction is of particular importance. Compared with their widely used rigid counterparts, soft actuators and robotic devices can provide a range of significant advantages; these include safe interaction, a range of complex motions, ease of fabrication and resilience to a variety of environments. In recent decades, significant effort has been invested in the development of soft rehabilitation and assistive devices for improving a range of medical treatments and quality of life. This review provides an overview of the current state of the art in soft actuators and robotic devices for rehabilitation and assistance, in particular systems that achieve actuation by pneumatic and hydraulic fluid power, electrical motors, chemical reactions and soft active materials such as dielectric elastomers, shape memory alloys, magnetoactive elastomers, liquid crystal elastomers and piezoelectric materials. Current research on soft rehabilitation and assistive devices is in its infancy, and new device designs and control strategies for improved performance and safe human-machine interaction are identified as particularly untapped areas of research. Finally, insights into future research directions are outlined.
With advancements in automation and high-throughput techniques, complex materials discovery with multiple conflicting objectives can now be tackled in experimental labs. Given that physical experimentation is greatly limited by the evaluation budget, maximizing the efficiency of optimization becomes crucial. We discuss the limitations of using hypervolume as a performance indicator for desired optimality across the entire multi-objective optimization run and propose new metrics specific to experimentation: the ability to perform well on complex high-dimensional problems, minimal wastage of evaluations, consistency/robustness of optimization, and the ability to scale well to high throughputs. With these metrics, we compare two conceptually different, state-of-the-art algorithms (Bayesian and evolutionary) on synthetic and real-world datasets. We discuss the merits of both approaches with respect to exploration and exploitation, where fully resolving the Pareto front could be the main aim for greater scientific value in understanding materials space, and thus provide a perspective for materials scientists implementing optimization in their platforms.
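For reference, the hypervolume indicator discussed above has an exact closed form in the two-objective minimization case: it is the area dominated by the nondominated front, bounded by a reference point, and can be computed with a single sweep. This is a generic sketch for illustration, not the benchmarked algorithms' implementation.

```python
def pareto_filter(points):
    """Return the nondominated subset of `points` for minimization:
    p is kept unless some other point q is <= p in every objective."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

def hypervolume_2d(points, ref):
    """Hypervolume (dominated area) of a 2-objective minimization front,
    bounded by reference point `ref`, which must be worse than every
    point in both objectives."""
    front = sorted(pareto_filter(points))  # ascending f1 => descending f2
    hv, y_prev = 0.0, ref[1]
    for x, y in front:
        hv += (ref[0] - x) * (y_prev - y)  # new strip gained by this point
        y_prev = y
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) dominates an area of 6; adding a dominated point such as (2.5, 2.5) leaves the indicator unchanged, which is one reason hypervolume alone can mask wasted evaluations.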
Excavation of regolith is the enabling process for many of the in-situ resource utilization (ISRU) efforts that are being considered to aid in the human exploration of the Moon and Mars. Most proposed planetary excavation systems are integrated with a wheeled vehicle, but none yet have used a screw-propelled vehicle, which can significantly enhance excavation performance. Therefore, CASPER, a novel screw-propelled excavation rover, is developed and analyzed to determine its effectiveness as a planetary excavator. The excavation rate, power, velocity, cost of transport, and a new parameter, excavation transport rate, are analyzed for various configurations of the vehicle through mobility and excavation tests performed in silica sand. The optimal configuration yielded a 30 kg/hr excavation rate and 10.2 m/min traverse rate with an overall system mass of 3.4 kg and a power draw of less than 30 W. These results indicate that this architecture shows promise as a planetary excavator because it provides significant excavation capability with low mass and power requirements.
The substantial increase in global population and climate change, among other factors, have led to global food security and supply chain challenges. The United Nations has laid out an agenda to sustainably achieve zero hunger by 2030 as one of its sustainable development goals. However, sustainably achieving improved food yield has become a challenge, as excessive use of fertilizers has also led to adverse environmental impact. To address the aforementioned challenges, WisDM Green, an artificial intelligence (AI)-based platform that aims to pinpoint and prioritize compound (e.g. biostimulant) combinations in peat moss, is harnessed to sustainably improve the yield of Amaranthus cruentus (red spinach). In this proof-of-concept study, from a pool of 8 compounds, WisDM Green-pinpointed combinations (6-Benzylaminopurine/Ethylenediaminetetraacetic Acid Iron (III) and Humic Acid/Seaweed Extract) achieve 26.34±15.80% and 33.59±14.60% increases in yield, respectively. The study also indicates that compound combinations may exhibit concentration-dependent synergies; thus, properly adjusting the concentration ratios of combinations may further improve plant yield in the context of sustainable farming. P. Wang and K. You contributed to this work equally.
"AI & Drug Discovery" has significantly advanced drug development and achieved excellent performance, especially with the rapid development of deep learning, making remarkable contributions to protecting human physiological health. However, owing to the "black-box" character of deep learning models, the decision routes and predicted results produced at different research stages are usually unexplainable, limiting their practical application and deeper drug discovery research. Focusing on drug molecules, in this paper we propose an explainable fragment-based molecular property attribution technique for analyzing how particular molecular fragments influence properties and how molecular properties relate to one another. Quantitative experiments on 42 benchmark property tasks demonstrate that 325 attribution fragments, accounting for 90% of the overall attribution results obtained by the proposed method, are positively relevant to the corresponding property tasks. More impressively, most randomly selected attribution results are consistent with existing mechanistic explanations. These findings provide a reference standard to help researchers develop more specific and practical drug molecule studies, such as synthesizing molecules with a targeted property using fragments obtained from the attribution method.
Quantum chemistry must evolve if it is to fully leverage the benefits of the internet age, where the World Wide Web offers a vast tapestry of tools that enable users to communicate and interact with complex data at the speed and convenience of a button press. The Open Chemistry project has developed an open-source framework that offers an end-to-end solution for producing, sharing, and visualizing quantum chemical data interactively on the web using an array of modern tools and approaches. These tools build on some of the best open-source community projects, such as Jupyter for interactive online notebooks, coupled with 3D-accelerated visualization, state-of-the-art computational chemistry codes including NWChem and Psi4, and emerging machine learning and data mining tools such as ChemML and ANI. They offer flexible formats to import and export data, along with approaches to compare computational and experimental data.
Scene flow tracks the three-dimensional (3D) motion of each point in adjacent point clouds. It provides fundamental 3D motion perception for autonomous driving and service robots. Although the Red Green Blue Depth (RGBD) camera or Light Detection and Ranging (LiDAR) captures discrete 3D points in space, objects and motions are usually continuous in the macro world. That is, objects keep themselves consistent as they flow from the current frame to the next. Based on this insight, a Generative Adversarial Network (GAN) is utilized to self-learn 3D scene flow with no need for ground truth. The fake point cloud of the second frame is synthesized from the predicted scene flow and the point cloud of the first frame. The adversarial training of the generator and discriminator is realized by synthesizing an indistinguishable fake point cloud and discriminating between the real point cloud and the synthesized fake one. Experiments on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) scene flow dataset show that our method achieves promising results without ground truth. Just as humans do, the proposed method can identify similar local structures in two adjacent frames even without knowing the ground truth scene flow. The local correspondence can then be correctly estimated, and in turn the scene flow.
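The synthesis step at the heart of this self-supervised scheme, warping the first frame with the predicted flow to obtain the fake second frame, reduces to a per-point translation. The helper names below and the end-point-error metric are generic illustrations under that assumption, not the paper's code; the metric is the standard evaluation measure and plays no role in the ground-truth-free training itself.

```python
import numpy as np

def synthesize_second_frame(pc1, flow):
    """Warp the first-frame point cloud (N x 3) with the predicted
    per-point scene flow (N x 3) to produce the 'fake' second frame
    that the discriminator must tell apart from the real one."""
    return pc1 + flow

def end_point_error(flow_pred, flow_gt):
    """Mean Euclidean end-point error (EPE), the standard scene-flow
    evaluation metric; used only for benchmarking, not for training."""
    return float(np.linalg.norm(flow_pred - flow_gt, axis=1).mean())
```

During training, the generator is rewarded when the warped cloud is indistinguishable from the real second frame, so a good flow prediction emerges without any flow labels.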
The explosive growth of data and information has motivated technological developments in computing systems that utilize them for efficiently discovering patterns and gaining relevant insights. Inspired by the structure and functions of biological synapses and neurons in the brain, neural network algorithms that can realize highly parallel computations have been implemented on conventional silicon transistor-based hardware. However, synapses composed of multiple transistors allow only binary information to be stored, and processing such digital states through complicated silicon neuron circuits makes low-power and low-latency computing difficult. Therefore, the attractiveness of emerging memories and switches as synaptic and neuronal elements, respectively, in implementing neuromorphic systems, which are suitable for performing energy-efficient cognitive functions and recognition, is discussed herein. Based on a survey of recent progress on memories, novel materials and device engineering strategies to mitigate challenges, primarily in achieving nonvolatile analog synaptic characteristics, are presented. Attempts to emulate the role of the neuron in various ways using compact switches and volatile memories are also discussed. It is hoped that this review will help direct future interdisciplinary research on the device, circuit, and architecture levels of neuromorphic systems.
Organic memristors are promising candidates for the flexible synaptic components of wearable intelligent systems. With heightened concerns for the environment, considerable effort has been made to develop organic transient memristors to realize eco-friendly flexible neural networks. However, in transient neural networks, achieving flexible memristors with bio-realistic synaptic plasticity for energy-efficient learning processes is still challenging. Here, we demonstrate a biodegradable and flexible polymer-based memristor suitable for the spike-dependent learning process. The electrochemical metallization phenomenon underlying conductive nanofilament growth in a polymer medium of poly(vinyl alcohol) (PVA) is analyzed, and a PVA-based transient and flexible artificial synapse is developed. The developed device exhibits superior biodegradability and stable mechanical flexibility due to the high water solubility and excellent tensile strength of the PVA film, respectively. In addition, the developed flexible memristor operates as a reliable synaptic device with optimized synaptic plasticity, which is ideal for artificial neural networks with spike-dependent operations. The developed device is found to serve effectively as a reliable synaptic component with high energy efficiency in practical neural networks. This novel strategy for developing transient and flexible artificial synapses can be a fundamental platform for realizing eco-friendly wearable intelligent systems.
The growing generation of data and their wide availability has led to the development of tools to produce, analyze and store this information. Computational chemistry studies, and especially catalytic applications, often yield a vast amount of chemical information that can be analyzed and stored using these tools. In this manuscript we present a framework that performs a fully automated procedure consisting of the transfer of an adsorbate from a known metal slab to a new metal slab with similar packing. Our method generates the new geometry, performs the required calculations and analysis, and finally uploads the processed data to an online database (ioChem-BD). Two different implementations have been built: one to relocate minimum-energy point structures and a second to transfer transition states. Our framework shows good performance for minimum point location and decent performance for transition state identification. Most of the failures occurred during the transition state searches, which needed additional steps to fully complete the process. Further improvements of our framework are required to increase the performance of both implementations. These results point to this _avoidhuman_ path as a feasible solution for studies on very large systems that would otherwise require a significant amount of human resources and in consequence be prone to human errors.
Single-use jumping robots that are mass-producible and biodegradable could be quickly released for environmental sensing applications. Such robots would be pre-loaded to perform a set number of jumps, in random directions and with random distances, removing the need for onboard energy and computation. Stochastic jumpers build on embodied randomness and large-scale deployments to perform useful work. This paper introduces simulation results showing how to construct a large group of stochastic jumpers to perform environmental sensing, and the first demonstration of robot prototypes that can perform a set number of sequential jumps, have full-body sensing, and are well suited to be made biodegradable.
Intelligence in its decisions is a trait that we have grown to expect from a cyber-physical system: in particular, that it makes the right choices at runtime, i.e., those that allow it to fulfill its tasks, even in the case of faults or unexpected interactions with its environment. Analyzing how to continuously achieve the currently desired (and possibly continuously changing) goals and how to adapt the system's behavior to reach these goals is undoubtedly a serious challenge. This becomes even more challenging if the atomic actions a system can implement become unreliable due to faulty components or some exogenous event outside its control. In this paper, we propose a solution to this challenge. In particular, we show how to adopt a light-weight diagnosis concept to cope with such situations. The approach is based on rules coupled with a means of rule selection that uses previous information regarding the success or failure of rule executions. We furthermore present a Java-based framework implementing the light-weight diagnosis concept, and discuss the results obtained from an experimental evaluation considering several application scenarios. At the end, we present a qualitative comparison with other related approaches that should help readers decide which approach works best for them.
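One simple way to realize success/failure-driven rule selection of this kind is to track per-rule statistics and pick the applicable rule with the best smoothed success rate. The sketch below is a minimal Python illustration under that assumption; all names are hypothetical and it is not the paper's Java framework or its selection policy.

```python
class RuleSelector:
    """Pick the applicable rule with the best observed success rate.
    A Laplace-smoothed rate (successes + 1) / (trials + 2) keeps
    never-tried rules from being starved or over-trusted."""

    def __init__(self, rules):
        self.stats = {r: [0, 0] for r in rules}  # rule -> [successes, trials]

    def select(self, applicable):
        """Return the applicable rule with the highest smoothed rate."""
        return max(applicable,
                   key=lambda r: (self.stats[r][0] + 1) / (self.stats[r][1] + 2))

    def feedback(self, rule, succeeded):
        """Record whether executing `rule` achieved its intended effect."""
        s, n = self.stats[rule]
        self.stats[rule] = [s + int(succeeded), n + 1]
```

For example, after a successful "retry" and a failed "reset", the selector prefers "retry" the next time both are applicable, which captures the idea of adapting action choice to unreliable atomic actions.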
Introduction
Real-world Time on Treatment (rwToT), also known as real-world time to treatment discontinuation (rwTTD), is defined as the length of time observed in real-world data (as distinct from controlled clinical trials) from initiation of a medication to discontinuation of that medication1,2. The ending of treatment can be caused by adverse events, deaths, switches of treatment and loss of follow-up. Because time to treatment discontinuation can be readily obtained from electronic medical records, this endpoint is a convenient way to evaluate the effectiveness of a drug that is already approved for public use3. It is often used as a surrogate effectiveness endpoint, showing high correlation to progression-free survival and moderate-to-high correlation to overall survival4,5. As rwTTD is an important metric for drug effectiveness, it is routinely reported during the post-clinical-trial phase2,4,6–9. Calculating rwTTD for a patient population is often equivalent to constructing a Kaplan-Meier (KM) curve, with each point representing the proportion of patients still on treatment at a specific time point1. The entire curve, the mean rwTTD, the restricted mean10, or the time point at which a specific proportion of patients (e.g., 50%) has dropped treatment may be of interest. Currently, there is no established machine learning scheme to predict such a curve, or its midpoint, as the vast majority of machine learning models have focused on predicting individuals' behavior rather than population-level behavior. Such a machine learning scheme, if established, would have many meaningful clinical applications. For instance, given observed clinical parameters and outcomes in clinical trials, how do we derive the expected time to treatment discontinuation in the real world? Given the rwTTD for a drug on one patient population, how can we predict the rwTTD when applying this drug to another population (e.g.
, for a different disease)? This study establishes a machine learning framework to infer population-wise rwTTD. We show that population-wise curve prediction differs substantially from aggregating all individuals' results. Our framework models the population-wise curve and is generic to diverse base-learners for predicting rwTTD. We demonstrate the effectiveness of this framework on both simulated data and real-world electronic medical record (EMR) data for pembrolizumab-treated cancer populations7,11,12. The study opens a new direction of modeling population-level rwTTD, which is of great value for directing post-clinical-stage drug administration. This machine learning scheme will also have meaningful implications for population-based predictions on other problems, as machine learning algorithms have so far focused on predictions for individual samples.
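The population-wise target curve described above is the standard Kaplan-Meier estimate of the proportion of patients still on treatment. A minimal sketch of its construction from observed durations and censoring flags (a generic textbook estimator, not the paper's prediction framework):

```python
def km_on_treatment_curve(durations, discontinued):
    """Kaplan-Meier estimate of the proportion of patients still on
    treatment over time.

    durations    : observed time on treatment for each patient.
    discontinued : True if discontinuation was observed, False if the
                   patient was censored (e.g. lost to follow-up).
    Returns a list of (time, proportion_still_on_treatment) points.
    """
    event_times = sorted(set(t for t, d in zip(durations, discontinued) if d))
    curve, surv = [], 1.0
    for t in event_times:
        at_risk = sum(1 for u in durations if u >= t)        # still being followed
        events = sum(1 for u, d in zip(durations, discontinued)
                     if d and u == t)                        # discontinued at t
        surv *= 1 - events / at_risk                         # KM product-limit step
        curve.append((t, surv))
    return curve
```

Quantities such as the median rwTTD then fall out of the curve directly, as the first time point at which the proportion drops to 50% or below; the machine learning task is to predict this curve for a new population rather than to recompute it from observed data.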