The goal of this study was to evaluate the effects of thinning eucalyptus trees on yield and nutritive value of corn for silage and palisadegrass in a crop–livestock–forest integrated system and to evaluate the total aboveground biomass yield in systems with and without trees. Plant variables, as well as the incidence of photosynthetically active radiation (PAR) and soil moisture, were evaluated between October 2016 and March 2018 in São Carlos, Brazil, in a crop–livestock–forest and a crop–livestock system. In the crop–livestock–forest system, eucalyptus trees (Eucalyptus urograndis clone GG100) were planted in April 2011, in single rows, with 15 × 2 m spacing. In 2016, the trees were thinned, and the spacing was changed to 15 × 4 m. The treatments comprised measurements at 0.00, 3.75, 7.50, and 11.25 m from the trees of the north row in the integrated crop–livestock–forest (iCLF) system and the integrated crop–livestock (iCL) system. Palisadegrass (Urochloa brizantha) was sown after harvesting the corn. Corn yields were similar between treatments, with an average of 13.6 Mg ha⁻¹. Corn for silage presented a higher percentage of grain in total biomass in the crop–livestock–forest positions (41.4 and 42.1%) than in the crop–livestock system (35.6%). No differences in forage accumulation were observed. Crude protein content in corn for silage and palisadegrass was higher in the crop–livestock–forest treatments than in the crop–livestock system. Such results indicate that thinning was favorable to production in the crop–livestock–forest system. Total aboveground biomass yield was higher in the iCLF system, indicating better land use for this type of integrated system.
Studies have shown that daily exposure to different products, whether chemical or natural, can cause irreversible damage to women’s reproductive health. Therefore, it is necessary to use tests that evaluate the safety and efficacy of these products. Most reproductive toxicology tests are performed in vivo. However, in recent years, various cell culture methods, including embryonic stem cells and tissues, have been developed with the aim of reducing the use of animals in toxicological tests. This is a major advance in the area of toxicology, as these systems have the potential to become a widely used tool compared with the in vivo tests routinely used in reproductive biology and toxicology. The present review describes and highlights data on in vitro culture processes used to evaluate reproductive toxicity as an alternative to traditional methods using in vivo tests.
The objectives were to develop an effective protocol for transfection of ovine secondary follicles and to assess the effect of attenuating aquaporin 3 (AQP3) using a small interfering RNA (siRNA-AQP3) on antrum formation and follicular growth in vitro. Various combinations of Lipofectamine® volumes (0.5, 0.75 or 1.0 µl), fluorescent oligonucleotide (BLOCK-iT™) concentrations (3.18, 27.12 or 36.16 nM) and exposure times (12, 14, 16, 18 or 20 h) were tested. The BLOCK-iT™ was replaced by siRNA-AQP3 in the transfection complex. Ovine secondary follicles were isolated and cultured in vitro for 6 days using standard protocols. Follicles were transfected on day 0 or 3 or on both days (0 and 3) and then cultured for an additional 3 or 6 days. As revealed by the fluorescence signal, the Lipofectamine®/BLOCK-iT™ complex (0.75 µl + 27.12 nM, 12 h of incubation) crossed the basement membrane and granulosa cells and reached the oocytes. In general, the rate of intact follicles was higher and the rate of antrum formation was lower in transfected follicles compared with control follicles. In conclusion, ovine secondary follicles can be successfully transfected during in vitro culture, and siRNA-mediated attenuation of the AQP3 gene reduced antrum formation in secondary follicles.
One of the main advantages of Prolog is its potential for the implicit exploitation of parallelism and, as a high-level language, Prolog is also often used as a means to explicitly control concurrent tasks. Tabling is a powerful implementation technique that overcomes some limitations of traditional Prolog systems in dealing with recursion and redundant sub-computations. Given these advantages, the question that arises is whether tabling also has the potential for the exploitation of concurrency/parallelism. On one hand, tabling still explores a search space as traditional Prolog does but, on the other hand, the concurrent model of tabling is necessarily far more complex, since it also introduces concurrency in the access to the tables. In this paper, we summarize Yap's main contributions to concurrent tabled evaluation and we describe the design and implementation challenges of several alternative table space designs for implicit and explicit concurrent tabled evaluation that represent different trade-offs between concurrency and memory usage. We also argue for the advantages of using fixed-size and lock-free data structures, elaborate on the key role that the engine's memory allocator plays in such environments, and discuss how Yap's mode-directed tabling support can be extended to concurrent evaluation. Finally, we present our future perspectives toward an efficient and novel concurrent framework that integrates both implicit and explicit concurrent tabled evaluation in a single Prolog engine.
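To make the intuition concrete, the sketch below (plain Python, an illustrative analogy of our own rather than anything taken from Yap) shows the core idea behind tabling: answers to sub-goals are stored in a table, repeated calls reuse the stored answers, and cyclic queries terminate instead of looping. Real tabling engines complete mutually dependent sub-goals by a fixpoint computation; this simplified version only guarantees the answer of the top-level query.

    # Illustration only: "tabling" viewed as memoization of sub-goal answers.
    # A naive recursive reachability query loops forever on a cyclic graph;
    # recording completed answers and detecting in-progress calls makes it
    # terminate and avoids redundant re-computation of shared sub-goals.

    def reachable(graph, start):
        table = {}           # completed sub-goal -> set of answers
        in_progress = set()  # sub-goals currently being evaluated

        def solve(node):
            if node in table:          # answer already tabled: reuse it
                return table[node]
            if node in in_progress:    # cyclic call: contributes no new answers here
                return set()
            in_progress.add(node)
            answers = set()
            for succ in graph.get(node, []):
                answers.add(succ)
                answers |= solve(succ)
            in_progress.discard(node)
            table[node] = answers      # complete and publish the table entry
            return answers

        return solve(start)

    # A cyclic graph on which untabled recursion would never terminate:
    print(sorted(reachable({"a": ["b"], "b": ["c", "a"], "c": []}, "a")))  # ['a', 'b', 'c']

In the concurrent setting studied in the paper, the table itself becomes shared state, which is why the choice of table data structures and of the memory allocator is so central.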
Although conservation discourses in Madagascar increasingly emphasize the role of customary institutions in wildlife management, we know relatively little about their effectiveness. Here, we used semi-structured interviews with 54 adults in eight villages to investigate whether sacred caves and taboos offer conservation benefits for cave-dwelling bats in and around Tsimanampetsotsa National Park, south-west Madagascar. Although some caves were described as sites of spiritual significance for the local communities, most interviewees (c. 76%) did not recognize their present-day sacred status. Similarly, only 22% of the interviewees recognized taboos inhibiting bat hunting and consumption. Legal protection of bats and caves through protected areas was often more widely acknowledged than customary regulations, although up to 30% of the interviewees reported consumption of bats within their communities. Guano extraction was often tolerated in sacred caves in exchange for economic compensation. This may benefit bat conservation by creating incentives for bat protection, although extraction is often performed through destructive and exploitative practices with little benefit for local communities. In view of these results, our study questions the extent to which sacred sites, taboos and protected areas offer protection for bats in Madagascar. These results support previous studies documenting the erosion of customary institutions in Madagascar, including the loss of the spiritual values underpinning sacred sites. Given that many Malagasy bats are cave-dwelling species and that most depend on the customary protection of these sites, it is important to obtain a better understanding of the complex interactions between spiritual practices, taboos and protected areas in sustaining bat diversity.
The use of palliative care (PC) screening criteria to trigger PC consultations may optimize the utilization of PC services, improve patient comfort, and reduce invasive and futile end-of-life care. The aim of the present study was to assess the criterion validity and inter-rater reliability of a PC screening tool for patients admitted to an emergency department intensive care unit (ED-ICU).
Method
Observational retrospective study evaluating PC screening criteria based on the presence of an advanced diagnosis and the use of two “surprise questions” (traditional and modified). Patients were classified at ED-ICU admission into four categories according to the proposed algorithm.
Result
A total of 510 patients were included in the analysis. Of these, 337 (66.1%) were category 1, 0 (0.0%) category 2, 63 (12.4%) category 3, and 110 (21.6%) category 4. Severity of illness (Simplified Acute Physiology Score III and mechanical ventilation), mortality (ED-ICU and intrahospital), and PC-related measures (order for a PC consultation, time between admission and PC consultation, and transfer to a PC bed) were significantly different across groups, most evidently between categories 4 and 1. Category 3 patients presented outcomes similar to those of category 1 patients for severity of illness and mortality. However, category 3 patients had a PC consultation ordered more frequently than did category 1 patients. The screening criteria were assessed by two independent raters (n = 100), and substantial inter-rater reliability was found, with 80% agreement and a kappa coefficient of 0.75 (95% confidence interval = 0.62, 0.88).
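For reference (a standard definition, not part of the study's report), Cohen's kappa corrects the observed agreement between raters for the agreement expected by chance,

\[
\kappa \;=\; \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of agreement and \(p_e\) the proportion expected by chance. The reported figures are mutually consistent: with \(p_o = 0.80\) and \(\kappa = 0.75\), the implied chance agreement is \(p_e = 0.20\).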
Significance of results
This study is the first step toward the implementation of a PC screening tool in the ED-ICU. The tool was able to discriminate three groups of patients within a spectrum of increasing severity of illness, risk of death, and PC needs, presenting substantial inter-rater reliability. Future research should investigate the implementation of these screening criteria into routine practice of an ED-ICU.
The recent series 5 of the Answer Set Programming (ASP) system clingo provides generic means to enhance basic ASP with theory reasoning capabilities. We instantiate this framework with different forms of linear constraints and elaborate upon its formal properties. Given this, we discuss the respective implementations, and present techniques for using these constraints in a reactive context. More precisely, we introduce extensions to clingo with difference and linear constraints over integers and reals, respectively, and realize them in complementary ways. Finally, we empirically evaluate the resulting clingo derivatives clingo[dl] and clingo[lp] on common language fragments and contrast them to related ASP systems.
This special issue of Theory and Practice of Logic Programming (TPLP) contains the regular papers accepted for presentation at the 33rd International Conference on Logic Programming (ICLP 2017), held in Melbourne, Australia from the 28th of August to the 1st of September, 2017. ICLP 2017 was colocated with the 23rd International Conference on Principles and Practice of Constraint Programming (CP 2017) and the 20th International Conference on Theory and Applications of Satisfiability Testing (SAT 2017). Since the first conference held in Marseille in 1982, ICLP has been the premier international event for presenting research in logic programming.
Hybrid MKNF knowledge bases have been considered one of the dominant approaches to combining open world ontology languages with closed world rule-based languages. Currently, the only known inference methods are based on the approach of guess-and-verify, while most modern SAT/ASP solvers are built on the DPLL architecture. The central impediment here is that it is not clear what constitutes a constraint propagator, a key component employed in any DPLL-based solver. In this paper, we address this problem by formulating the notion of unfounded sets for non-disjunctive hybrid MKNF knowledge bases, based on which we propose and study two new well-founded operators. We show that by employing a well-founded operator as a constraint propagator, a sound and complete DPLL search engine can be readily defined. We compare our approach with the operator based on the alternating fixpoint construction of Knorr et al. (2011, Artificial Intelligence 175(9), 1528–1554) and show that, when applied to arbitrary partial partitions, the new well-founded operators not only propagate more truth values but also circumvent the non-converging behavior of the latter. In addition, we study the possibility of simplifying a given hybrid MKNF knowledge base by employing a well-founded operator and show that, of the two operators proposed in this paper, the weaker one can be applied for this purpose and the stronger one cannot. These observations are useful in implementing a grounder for hybrid MKNF knowledge bases, which can be applied before the computation of MKNF models.
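As background (the classical notion for normal logic programs, recalled here from the standard literature; the paper's formulation for non-disjunctive hybrid MKNF knowledge bases extends it), a set \(U\) of atoms is unfounded with respect to a partial interpretation \(I\) when every rule that could derive an atom of \(U\) is blocked, that is,

\[
\text{for every rule } a \leftarrow b_1,\dots,b_m,\ \mathit{not}\ c_1,\dots,\mathit{not}\ c_n \text{ with } a \in U:\;
\text{some } b_i \text{ is false in } I,\ \text{some } c_j \text{ is true in } I,\ \text{or some } b_i \in U.
\]

Atoms of an unfounded set can be propagated to false, which is exactly the kind of inference a constraint propagator performs inside a DPLL-style search engine.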
Among the myriad of desirable properties discussed in the context of forgetting in Answer Set Programming, strong persistence naturally captures its essence. Recently, it has been shown that it is not always possible to forget a set of atoms from a program while obeying this property, and a precise criterion regarding what can be forgotten has been presented, accompanied by a class of forgetting operators that return the correct result when forgetting is possible. However, it is an open question what to do when we have to forget a set of atoms but cannot do so without violating this property. In this paper, we address this issue and investigate three natural ways to forget when forgetting without violating strong persistence is not possible, which turn out to correspond to the different possible relaxations of the characterization of strong persistence. Additionally, we discuss when each of them is preferable, shed light on the relation between forgetting and notions of relativized equivalence established earlier in the context of Answer Set Programming, and present a detailed study of their computational complexity.
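For orientation (our informal recollection of the property, not a quotation from the paper), a forgetting operator \(\mathsf{f}\) obeys strong persistence when forgetting a set of atoms \(V\) from a program \(P\) preserves the answer sets, modulo \(V\), no matter which program \(R\) over the remaining atoms is added afterwards:

\[
\mathcal{AS}(\mathsf{f}(P,V) \cup R) \;=\; \{\, A \setminus V \;:\; A \in \mathcal{AS}(P \cup R) \,\}
\quad\text{for every program } R \text{ over atoms not in } V.
\]

It is this requirement that, as noted above, cannot always be met.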
This paper describes an approach to the methodology of answer set programming that can facilitate the design of encodings that are easy to understand and provably correct. Under this approach, after appending a rule or a small group of rules to the emerging program, we include a comment that states what has been “achieved” so far. This strategy allows us to set out our understanding of the design of the program by describing the roles of its small parts in a mathematically precise way.
LPMLN is a recent addition to probabilistic logic programming languages. Its main idea is to overcome the rigid nature of the stable model semantics by assigning a weight to each rule, in a way similar to how Markov Logic is defined. We present two implementations of LPMLN, lpmln2asp and lpmln2mln. System lpmln2asp translates LPMLN programs into the input language of the answer set solver clingo and, using weak constraints and stable model enumeration, it can compute most probable stable models as well as exact conditional and marginal probabilities. System lpmln2mln translates LPMLN programs into the input language of Markov Logic solvers, such as alchemy, tuffy, and rockit, and allows for performing approximate probabilistic inference on LPMLN programs. We also demonstrate the usefulness of the LPMLN systems for computing other languages, such as ProbLog and Pearl's Causal Models, which are shown to be translatable into LPMLN.
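As a rough sketch of the underlying semantics (our recollection of the usual LPMLN definition, offered as orientation rather than quoted from this paper): for a program \(\Pi\) of weighted rules \(w : R\), an interpretation \(I\) receives the weight

\[
W_\Pi(I) \;=\;
\begin{cases}
\exp\bigl(\textstyle\sum_{w:R \,\in\, \Pi,\; I \,\models\, R} w\bigr) & \text{if } I \text{ is a stable model of } \{\, R \mid w\!:\!R \in \Pi,\ I \models R \,\},\\
0 & \text{otherwise,}
\end{cases}
\]

and probabilities are obtained by normalizing these weights over all interpretations, so rules act as soft constraints that a stable model may violate at a cost rather than as requirements that must always hold.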
Ontology-based query answering asks whether a Boolean conjunctive query is satisfied by all models of a logical theory consisting of a relational database paired with an ontology. The introduction of existential rules (i.e., Datalog rules extended with existential quantifiers in rule heads) as a means to specify the ontology gave birth to Datalog+/-, a framework that has received increasing attention in the last decade, with focus also on decidability and finite controllability to support effective reasoning. Five basic decidable fragments have been singled out: linear, weakly acyclic, guarded, sticky, and shy. Moreover, for all these fragments, except shy, the important property of finite controllability has been proved, ensuring that a query is satisfied by all models of the theory iff it is satisfied by all its finite models. In this paper, we complete the picture by demonstrating that finite controllability of ontology-based query answering holds also for shy ontologies, and it therefore applies to all basic decidable Datalog+/- classes. To make the demonstration, we devise a general technique to facilitate the process of (dis)proving finite controllability of an arbitrary ontological fragment.
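Stated as a formula (a direct restatement of the property just described, with \(\mathit{DB}\) the database, \(\Sigma\) the ontological rules, and \(q\) a Boolean conjunctive query), finite controllability means

\[
\mathit{DB} \cup \Sigma \models q \;\iff\; \mathit{DB} \cup \Sigma \models_{\mathrm{fin}} q,
\]

where \(\models\) quantifies over all models and \(\models_{\mathrm{fin}}\) only over the finite ones; the contribution of the paper is that this equivalence also holds when \(\Sigma\) is a shy ontology.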
Answer set programming (ASP) is a well-established declarative paradigm. One of the successes of ASP is the availability of efficient systems. State-of-the-art systems are based on the ground+solve approach. In some applications, this approach is infeasible because the grounding of one or a few constraints is expensive. In this paper, we systematically compare alternative strategies to avoid the instantiation of problematic constraints, which are based on custom extensions of the solver. Results on real and synthetic benchmarks highlight some strengths and weaknesses of the different strategies.
This paper introduces GLINTS, a graphical tool for exploring variant narrowing computations in Maude. The most recent version of Maude, version 2.7.1, provides quite sophisticated unification features, including order-sorted equational unification for convergent theories modulo axioms such as associativity, commutativity, and identity. This novel equational unification relies on built-in generation of the set of variants of a term t, i.e., the canonical form of tσ for a computed substitution σ. Variant generation relies on a novel narrowing strategy called folding variant narrowing that opens up new applications in formal reasoning, theorem proving, testing, protocol analysis, and model checking, especially when the theory satisfies the finite variant property, i.e., there is a finite number of most general variants for every term in the theory. However, variant narrowing computations can be extremely involved and are simply presented in text format by Maude, often being too unwieldy to debug or even understand. The GLINTS system provides support for (i) determining whether a given theory satisfies the finite variant property, (ii) thoroughly exploring variant narrowing computations, (iii) automatic checking of node embedding and closedness modulo axioms, and (iv) querying and inspecting selected parts of the variant trees.
We argue that turning a logic program into a set of completed definitions can sometimes be thought of as the “reverse engineering” process of generating a set of conditions that could serve as a specification for it. Accordingly, it may be useful to define completion for a large class of Answer Set Programming (ASP) programs and to automate the process of generating and simplifying completion formulas. Examining the output produced by this kind of software may help programmers to see more clearly what their program does, and to what degree its behavior conforms to their expectations. As a step toward this goal, we propose here a definition of program completion for a large class of programs in the input language of the ASP grounder gringo, and study its properties.
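As a small illustration (a textbook-style example, not taken from the paper), consider the program consisting of the two rules p :- q and p :- not r. Its completion collects the rule bodies defining each atom into a single equivalence and declares atoms with no rules false:

\[
p \leftrightarrow q \lor \neg r, \qquad q \leftrightarrow \bot, \qquad r \leftrightarrow \bot.
\]

The only model of these formulas makes p true and q, r false, which is also the unique answer set of the program; for programs without positive recursion the two notions coincide, and it is this kind of correspondence that lets the completion serve as a readable specification of what the program computes.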
Management of chronic diseases, such as heart failure, is a major public health problem. A standard approach to managing chronic diseases by the medical community is to have a committee of experts develop guidelines that all physicians should follow. Due to their complexity, these guidelines are difficult to implement and are adopted slowly by the medical community at large. We have developed a physician advisory system that codes the entire set of clinical practice guidelines for managing heart failure using answer set programming. In this paper, we show how abductive reasoning can be deployed to find missing symptoms and conditions that the patient must exhibit in order for a treatment prescribed by a physician to work effectively. Thus, if a physician does not make an appropriate recommendation or makes a non-adherent recommendation, our system will advise the physician about symptoms and conditions that must be in effect for that recommendation to apply. It is under consideration for acceptance in TPLP.
In inductive learning of a broad concept, an algorithm should be able to distinguish concept examples from exceptions and noisy data. An approach through recursively finding patterns in exceptions turns out to correspond to the problem of learning default theories. Default logic is what humans employ in common-sense reasoning. Therefore, learned default theories are better understood by humans. In this paper, we present new algorithms to learn default theories in the form of non-monotonic logic programs. Experiments reported in this paper show that our algorithms are a significant improvement over traditional approaches based on inductive logic programming. Under consideration for acceptance in TPLP.