Here we provide a selection of academic journal templates for articles and papers, which automatically format your manuscript in the style required for submission to each journal. Thanks to the partnerships we're building within the publishing community, you can now also submit your paper directly to a number of journals and other editorial and review services via the publish menu in the editor.
This is the LaTeX template for BioMed Central journal article submissions.
The BioMed Central instructions for authors provide guidelines for preparing your manuscript for submission.
You can start writing your paper online with Overleaf - simply click the image or "Open as Template" to get started.
This template includes formatting instructions for authors using LaTeX who will be publishing a paper in an AAAI (Association for the Advancement of Artificial Intelligence) Press proceedings or technical report.
AAAI creates proceedings, working notes, and technical reports directly from electronic source furnished by the authors. To ensure that all papers in the publication have a uniform appearance, authors must adhere to the following instructions.
See http://www.aaai.org/Publications/Author/author.php for more information.
We all have a good reason to learn a new language: discovering our roots, a passion for travel, academic goals, or pure interest. However, most of us find it hard to become conversationally fluent in a new language using traditional learning resources such as textbooks and online tutorials. In this paper we propose a novel approach to learning a new language. We aim to develop an intelligent browser extension, LanGauger, that will help users learn foreign languages. The application will allow users to look up words while they browse by highlighting the text to be learned; it will then provide a translation of the word, its pronunciation, and its usage in example sentences. In addition, this intelligent tutor will remember which words the user has seen and quiz them on these words at appropriate times. Besides testing recall, this feature encourages users to think about and use the language frequently.
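The idea of remembering which words a user has seen and quizzing them "at appropriate times" is the classic spaced-repetition pattern. The abstract does not specify an algorithm, so the sketch below uses a simple Leitner-style scheme as one plausible interpretation; the class, function names, and interval choices are all hypothetical illustrations, not part of the proposed system.

```python
from datetime import date, timedelta

# Leitner-style review intervals (days). A correct answer promotes a word
# to the next, longer interval; a mistake demotes it back to daily review.
# These particular intervals are a hypothetical choice for illustration.
INTERVALS = [1, 3, 7, 14, 30]

class Word:
    """A word the user highlighted while browsing."""

    def __init__(self, text, translation):
        self.text = text
        self.translation = translation
        self.box = 0  # index into INTERVALS
        self.due = date.today() + timedelta(days=INTERVALS[0])

    def review(self, correct, today=None):
        """Record one quiz result and schedule the next review."""
        today = today or date.today()
        if correct:
            self.box = min(self.box + 1, len(INTERVALS) - 1)
        else:
            self.box = 0
        self.due = today + timedelta(days=INTERVALS[self.box])

def due_words(words, today=None):
    """Words the extension should quiz the user on today."""
    today = today or date.today()
    return [w for w in words if w.due <= today]
```

A real extension would persist this state in browser storage and trigger quizzes from the scheduler, but the promotion/demotion logic above is the core of "appropriate times".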
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for a hypothesis as well as for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than in the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
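The abstract's weak-evidence threshold (Bayes factor < 10) can be made concrete with the simplest analytic case, a binomial test of a fair coin against a uniform alternative. This is only an illustrative sketch of what a Bayes factor is; it is not the paper's actual model, which accounts for publication bias, and the function name and prior choice here are our own.

```python
from math import exp, lgamma, log

def bayes_factor(k, n):
    """BF10 for k successes in n trials.

    H0: theta = 0.5  versus  H1: theta ~ Uniform(0, 1).
    Both marginal likelihoods contain the same binomial coefficient,
    which cancels, leaving BF10 = Beta(k+1, n-k+1) / 0.5**n.
    Computed on the log scale via lgamma for numerical stability.
    """
    log_beta = lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)
    return exp(log_beta - n * log(0.5))

# 70/100 successes: strong evidence against a fair coin (BF10 > 10).
print(bayes_factor(70, 100))
# 55/100 successes: weak evidence in both directions (1/10 < BF10 < 10),
# mirroring the abstract's point that many studies were inconclusive.
print(bayes_factor(55, 100))
```

Note that, unlike a p-value, a Bayes factor well below 1 would count as evidence *for* the null, which is how the abstract can report that no replication provided strong evidence in favor of the null.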