Papers and citations are the currencies in which the physicists in my field express their scientific value. When I, or one of my collaborators, have an idea for a model or for a calculation, a paper is almost always the end result I have in mind. So after a couple of months of reading, brainstorming, computing and eventually writing, we proudly present the result of our labor on arXiv.org, an open-access, very lightly moderated platform. Only after this initial appearance do we start considering submission to a peer-reviewed journal.
What is peer review? Our carefully selected journal assigns an editor to our piece, who in turn selects a colleague in the field to review it and comment on its merit for publication. Usually this is a multi-step process: the reviewer suggests modifications to improve the article, but sometimes the article is rejected if the work is not deemed sufficiently sound or innovative. These feedback loops form an integral part of the Scientific Method and are relied upon, perhaps too strongly, as a self-checking bootstrap for the research field. A result surviving peer review is often equated with the result being legitimate.
Most people are under the impression that peer review is a long-established practice in the sciences. In fact, it only became common practice in the latter half of the last century: Einstein himself, for instance, is thought to have been subjected to peer review only once, and thoroughly loathed the experience. Before the large growth in the number of active scientists, and before computers, journals received too few submissions to be very selective and relied solely on editorial judgment.
While a shortage of submissions to choose from is not desirable, nowadays we face another shortage: time. Researchers are first and foremost assessed on the number of (well-cited) papers they have published. Their performance as a reviewer, supervisor, outreach figure, or teacher is often deemed less important. In a fast-paced academic field in which changing jobs every two or three years is the inescapable norm for many, prioritizing tasks is imperative for career advancement.
The consequences are visible. Though not well documented in general, there are striking examples of fake articles, and even randomly generated articles (!), that made it into peer-reviewed journals. Conversely, there are also notable examples of significant scientific findings that almost escaped publication.
There is another force that seems to drive scientists away from peer review. I, among many others, have set up a daily email update listing all the papers of potential interest to me that were submitted to the arXiv the day before. In a fast-moving field, I want (and need) to know the newest results as soon as they are available, not months later after peer review. Even when looking up older papers, I usually choose the open-access arXiv as a single platform over multiple journals with paywalls or institutional logins.
Ultimately, most scientists understand that peer review is only part of a more elaborate structure of inspections: it is just one strand in the web of the Scientific Method. The most stringent check is that good results should be independently reproducible. This will be tested sooner or later for any significant piece of research, usually not by reviewers but by scientists wanting to apply the results or techniques. Peer review therefore cannot be relied on by itself to confirm a scientific result; it should be seen as the first step on a ladder of verifications.
 This has been the order for most of the projects I have been involved with, and some peer-reviewed journals strongly encourage you to submit to the arXiv first. However, some authors still prefer to go through peer review first.
 The renowned journal Nature, for instance, did not introduce peer review formally until 1967.
 This austere culture is often described as “publish or perish”.