Your search
Results: 49 resources
-
The Harvard Law School Library's Nuremberg Trials Project is an open-access initiative to create and present digitized images and full-text versions of the Library's Nuremberg Trials documents, descriptions of each document, and general information about the trials. The project provides access to the documentary record and other associated materials for all thirteen trials. We currently offer page images and searchable document descriptions for all documents, as well as searchable full-text versions of most prosecution exhibits and most of the trial transcripts. Both the exhibits and the trial transcripts are currently offered only in the English-language versions used at the time of the trials. Important: when citing from documents and transcripts, always verify the exact wording by consulting the accompanying page images of the document in question. The full-text versions of transcripts and documents are a product of human- and computer-aided transcription and may deviate in small ways from the originals.
-
Ingenium – Canada's Museums of Science and Innovation in Ottawa creates outstanding exhibitions, accessible and family-friendly programs, and collaborative research, brought to life through our national collection of artifacts.
-
Search of the nominative census lists for the Paris population censuses of 1926, 1931 and 1936.
-
Access to the personal files of individuals appointed or promoted in the Order of the Légion d'honneur since 1802 who died before 1977. The original files are held at the Archives nationales or at the Grande Chancellerie de la Légion d'honneur.
-
Corpus Académie française. (n.d.). ORTOLANG. https://hdl.handle.net/11403/corpus-academie-francaise
The Corpus Académie française is a lemmatized and tagged corpus of the texts published on the Académie française website under the headings "Discours", "Dire, ne pas dire" and "Questions de langue". Coverage: 1635 to the present.
-
Overview

This code, written in the R programming language, downloads and processes the full set of resolutions, drafts and meeting records rendered by the United Nations Security Council (UNSC), as published by the UN Digital Library, into a rich and structured human- and machine-readable dataset. It is the basis for the Corpus of Resolutions: UN Security Council (CR-UNSC).

All data sets created with this script will always be hosted permanently open access and freely available at Zenodo, the scientific repository of CERN. Each version is uniquely identified with a persistent Digital Object Identifier (DOI), the Version DOI. The newest version of the data set will always be available via the link of the Concept DOI: https://doi.org/10.5281/zenodo.7319780

Updates

The CR-UNSC will be updated at least once per year. In case of serious errors, an update will be provided at the earliest opportunity and a highlighted advisory issued on the Zenodo page of the current version. Minor errors will be documented in the GitHub issue tracker and fixed with the next scheduled release.

The CR-UNSC is versioned according to the day of the last run of the data pipeline, in the ISO format YYYY-MM-DD. Its initial release version is 2024-05-03.

Notifications regarding new and updated data sets will be published on my academic website at www.seanfobbe.com or on the Fediverse at @seanfobbe@fediscience.org

Changelog

- New variant: EN_TXT_BEST, containing a write-out of the English resolution texts equivalent to the CSV file text variable
- New diagrams: bar charts of top M49 regions and sub-regions of countries mentioned in resolution texts
- Fixed naming mix-up of BIBTEX and GRAPHML zip archives
- Fixed whitespace character detection in citation extraction (adds ca. 10% more citations)
- Fixed improper merging of weights in citation network
- Fixed "cannot xtfrm data frames" warning
- Improved REGEX detection for certain geographic entities
- Improved Codebook (headings, citation network docs)

Functionality

The pipeline will produce the following results and store them in the output/ folder:

- Codebook as PDF
- Compilation Report as PDF
- Quality Assurance Report as PDF
- ZIP archive containing the main data set as a CSV file
- ZIP archive containing only the metadata of the main data set as a CSV file
- ZIP archive containing citation data and metadata as a GraphML file
- ZIP archive containing bibliographic data as a BIBTEX file
- ZIP archive containing all resolution texts as TXT files (OCR and extracted)
- ZIP archive containing all resolution texts as PDF files (original and English OCR)
- ZIP archive containing all draft texts as PDF files (original)
- ZIP archive containing all meeting record texts as PDF files (original)
- ZIP archive containing the full Source Code
- ZIP archive containing all intermediate pipeline results ("targets")

The integrity and veracity of each ZIP archive are documented with cryptographically secure hash signatures (SHA2-256 and SHA3-512). Hashes are stored in a separate CSV file created during the data set compilation process; a sketch of how such hashes could be verified in R follows after the system requirements.

System Requirements

- The reference data sets were compiled on a Debian host system. Running the Docker config on an SELinux system like Fedora will require modifications to the Docker Compose config file.
- 40 GB of hard drive space
- A multi-core CPU is recommended. We used 8 cores/16 threads to compile the reference data sets. The standard config will use all cores on a system; this can be fine-tuned in the config file.
- Given these requirements, the runtime of the pipeline is approximately 40 hours.
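The following is a minimal sketch (not part of the repository) of how the published hash signatures could be checked in R after downloading a release from Zenodo. The file names and CSV column names are assumptions for illustration only; use the actual names from the Zenodo record and the hash CSV shipped with the data set.

library(openssl)  # sha256() can stream a file connection

## Hypothetical file names; substitute the actual release files
zip_file <- "CR-UNSC_2024-05-03_CSV_FULL.zip"
hash_csv <- "CR-UNSC_2024-05-03_HASHES.csv"

## Read the published hashes (column names are assumptions)
hashes    <- read.csv(hash_csv, stringsAsFactors = FALSE)
published <- hashes$sha2_256[hashes$filename == basename(zip_file)]

## Compute the SHA2-256 digest of the downloaded archive
local_hash <- as.character(sha256(file(zip_file)))

## Compare, ignoring case
if (identical(tolower(local_hash), tolower(published))) {
  message("SHA2-256 verified: archive matches the published signature.")
} else {
  stop("Hash mismatch: the archive may be corrupted; re-download it.")
}

The SHA3-512 signature in the same CSV can be checked analogously with a SHA3-capable hashing function.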
Instructions

Step 1: Prepare Folder

Copy the full source code to an empty folder, for example by executing:

$ git clone https://github.com/seanfobbe/cr-unsc

Always use a dedicated and empty (!) folder for compiling the data set. The scripts will automatically delete all PDF, TXT and many other file types in their working directory to ensure a clean run.

Step 2: Create Docker Image

The Dockerfile contains automated instructions to create a full operating system with all necessary dependencies. To create the image from the Dockerfile, please execute:

$ bash docker-build-image.sh

Step 3: Compile Dataset

If you have previously compiled the data set, whether successfully or not, you can delete all output and temporary files by executing:

$ Rscript delete_all_data.R

You can compile the full data set by executing:

$ bash docker-run-project.sh

Results

The data set and all associated files are now saved in your working directory; a sketch of loading the main CSV in R follows at the end of this entry.

GNU General Public License Version 3

Copyright (C) 2024 Seán Fobbe, Lorenzo Gasbarri and Niccolò Ridi

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/

Author Websites

- Personal Website of Seán Fobbe
- Personal Website of Lorenzo Gasbarri
- Personal Website of Niccolò Ridi

Contact

Did you discover any errors? Do you have suggestions on how to improve the data set? You can post these to the Issue Tracker on GitHub or contact Seán Fobbe via https://seanfobbe.com/contact/
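Once the pipeline has finished, or after downloading and unpacking a release from Zenodo, the main data set can be inspected directly in R. A minimal sketch, assuming a hypothetical file name and a hypothetical 'year' variable; the actual file names and variables are documented in the Codebook:

## Hypothetical file name; use the CSV extracted from the main ZIP archive
unsc <- read.csv("CR-UNSC_2024-05-03_CSV_FULL.csv", stringsAsFactors = FALSE)

## Basic sanity checks on the loaded data set
str(unsc)    # variable names and types
nrow(unsc)   # number of records

## Example: tabulate resolutions per year (assumes a 'year' variable)
table(unsc$year)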
-
An exploration of CSR's launch of a clear, concise repository documenting deployed nuclear weapons throughout their history.
-
This list of open access, digitized diaries is designed to assist students and scholars interested in studying diaries.
-
France China Archives is an open-access platform that inventories the photographic archives of China held in France. It covers private and public collections whose content spans the period from the 1840s to the present day. The platform was designed to provide a scholarly digital space for studying the variety of photographic practices in China, while making French-language resources accessible to the scientific community, students, and the general public.
-
The Chinese Women’s Studies database is a unique resource for information about women’s social conditions, women’s movements and the rise of feminism in China. It is built upon a collection of card records donated to the Cheng Yu Tung East Asian Library in the early 1980s by Dr. Bobby Siu, then a PhD student at Carleton University in Ottawa, Ontario. He collected the information and recorded it on thousands of cards while conducting research on Chinese women. The records were all handwritten in either English or Chinese. Topics range from education and marriage to the women's movement, revolutions, and wars.
-
Funü zazhi is one of the longest-running women’s magazines in China. It was published by the Shanghai Commercial Press between 1915 and 1931.
-
Includes 214 titles and about 110,000 records from women's magazines. Developed and maintained by the Institute of Modern History, Academia Sinica. Free, but registration is required.
-
An open-access data platform, developed by the Institute of Modern History at the Chinese Academy of Social Sciences, which provides free access to materials (books, newspapers, archival documents, journals, images, audio, and video) related to Sino-Japanese relations from 1731 onward.
-
An open-access database from a research project that examines four influential women's magazines published in Shanghai between 1904 and 1937. It provides research materials, including images of the original issues of the magazines.
-
Chinese books have been part of the Bodleian Libraries’ collections since the Library’s foundation in 1602. The first known acquisition of a Chinese book dates to 1604. Our founder, Sir Thomas Bodley, was instrumental in starting this collection, even though he did not speak or understand Chinese; his handwriting appears in the 1604 book. The collection grew over the following four centuries and continues to grow today. It is now one of the most significant Chinese rare book and manuscript collections outside China, containing the largest number of Chinese books to arrive in Europe in the 17th century. As part of a ten-year project funded by the Chung Hon Dak Foundation, we have digitised over 1,800 items and worked with the local Chinese community to explore our collections in more detail.
-
CUBIQ (the unified catalogue of Quebec government libraries) is the collective catalogue of the libraries of the RIBG. It allows users to locate online the publications available in the libraries of Quebec government ministries and agencies that use RIBG services to manage their operations. It is updated daily. CUBIQ holds a vast and diverse collection: 500,000 books; 250,000 government publications; 14,000 journals and newspapers. CUBIQ also provides on-screen access to a growing number of publications; currently, more than 100,000 titles are available online.
-
1933-...
Explore
Places
- Asia (9)
- Canada (13)
- United States (11)
- Europe (9)
- France (4)
- United Kingdom (2)
- International (7)
Subjects
- Arts & Literature (1)
- Criminology (1)
- Demography and Population (2)
- Law and Legislation (2)
- Economics (3)
- Black Studies (3)
- Gender and Sexuality (1)
- Military and Peace (4)
- Philosophy and Sciences (1)
- Politics (6)
- Religion (1)
- Sociology & Labour (1)
Types
- News (7)
- Artifacts and Material Culture (2)
- Audio-Video (3)
- Maps (1)
- Data and Statistics (1)
- Images (2)
- Computing (3)
- Books (1)
- Unpublished Works (1)