
CLI & GUI v0.17.1.3 'Oxygen Orion' released!

This is the CLI & GUI v0.17.1.3 'Oxygen Orion' point release. This release predominantly features bug fixes and performance improvements. Users, however, are recommended to upgrade, as it includes mitigations for the issue where transactions occasionally fail.

(Direct) download links (GUI)

(Direct) download links (CLI)

GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 # This GPG-signed message exists to confirm the SHA256 sums of Monero binaries. # # Please verify the signature against the key for binaryFate in the # source code repository (/utils/gpg_keys). # # ## CLI 38a04a7bd00733e9d943edba3004e44730c0848fe5e8a4fca4cb29c12d1e6b2f monero-android-armv7-v0.17.1.3.tar.bz2 0e94f58572646992ee21f01d291211ed3608e8a46ecb6612b378a2188390dba0 monero-android-armv8-v0.17.1.3.tar.bz2 ae1a1b61d7b4a06690cb22a3389bae5122c8581d47f3a02d303473498f405a1a monero-freebsd-x64-v0.17.1.3.tar.bz2 57d6f9c25bd1dbc9d6b39fcfb13260b21c5594b4334e8ed3b8922108730ee2f0 monero-linux-armv7-v0.17.1.3.tar.bz2 a0419993fbc6a5ca11bcd2e825acef13e429824f4d8c7ba4ec73ac446d2af2fb monero-linux-armv8-v0.17.1.3.tar.bz2 cf3fb693339caed43a935c890d71ecab5b89c430e778dc5ef0c3173c94e5bf64 monero-linux-x64-v0.17.1.3.tar.bz2 d107384ff7b1f77ee4db93940dbfda24d6045bf59c43169bc81a0118e3986bfa monero-linux-x86-v0.17.1.3.tar.bz2 79557c8bee30b229bda90bb9ee494097d639d60948fc2ad87a029359b56b1b48 monero-mac-x64-v0.17.1.3.tar.bz2 3eee0d0e896fb426ef92a141a95e36cb33ca7d1e1db3c1d4cb7383994af43a59 monero-win-x64-v0.17.1.3.zip c9e9dde61b33adccd7e794eba8ba29d820817213b40a2571282309d25e64e88a monero-win-x86-v0.17.1.3.zip # ## GUI 15ad80b2abb18ac2521398c4dad9b8bfea2e6fc535cf4ebcc60d99b8042d4fb2 monero-gui-install-win-x64-v0.17.1.3.exe 3bed02f9db5b7b2fe4115a636fecf0c6ec9079dd4e9284c8ce2c67d4996e2a4a monero-gui-linux-x64-v0.17.1.3.tar.bz2 23405534c7973a8d6908b76121b81894dc853039c942d7527d254dfde0bd2e8f monero-gui-mac-x64-v0.17.1.3.dmg 0a49ccccb561445f3d7ec0087ddc83a8b76f424fb7d5e0d725222f3639375ec4 monero-gui-win-x64-v0.17.1.3.zip # # # ~binaryFate -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl+oVkkACgkQ8K9NRioL 35Lmpw//Xs09T4917sbnRH/DW/ovpRyjF9dyN1ViuWQW91pJb+E3i9TY+wU3q85k LyTihDB5pV+3nYgKPL9TlLfaytJIQG0vYHykPWHVmYmvoIs9BLarGwaU3bjO0rh9 ST5GDMdvxmQ5Y1LTwVfKkmBJw26DAs0xAvjBX44oRQjjuUdH6JdLPsqa5Kb++NCM b453m5s8bT3Cw6w0eJB1FQEyQ5BoDrwYcFzzsS1ag/C4Ylq0l6CZfEambfOQvdUi 7D5Rywfhiz2t7cfn7LaoXb74KDA/B1bL+R1/KhCuFqxRTOQzq9IxRywh4VptAAMU UR7jFHFijOMoyggIbkD48JmAjlBnqIyQJt4D5gbHe+tSaSoKdgoTGBAmIvaCZIng jfn9pTNzIJbTptsQhhyZqQQIH87D8BctZfX7pREjJmMNGwN2jFxXqUNqYTso20E6 YLtC1mkZBBZ294xHqT1mQpfznc6uVJhhoJpta0eKxkr1ahrGvWBDGZeVhLswnBcq 9dafAkR14rdK1naiCsygb6hMvBqBohVu/bWuhycJcv6XRvlP7UHkR6R8+s6U4Tk2 zaJERQF+cHQpEak5aEJIvDlb/mxteGyvPkPyL7UmADEQh3C4nREwkDSdnitYnF+e HxJZkshoC98+YCkWUP4+JYOOT158jKao3u0laEOxVGOrPz1Nc64= =Ys4h -----END PGP SIGNATURE----- 
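As a rough sketch of this verification on Linux or Mac OS X (file names are illustrative: binaryfate.asc is binaryFate's public key from the source repository's /utils/gpg_keys directory, and hashes.txt is the signed message above saved to a file):
gpg --import binaryfate.asc
gpg --verify hashes.txt
sha256sum monero-linux-x64-v0.17.1.3.tar.bz2
The last command prints the SHA256 sum of the downloaded archive, which must match the corresponding line in the signed message. On Windows, certutil -hashfile monero-gui-win-x64-v0.17.1.3.zip SHA256 can be used instead of sha256sum.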

Upgrading (GUI)

Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear shortly with the new binary.
In case you want to update manually, you ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website. If you run active AV (AntiVirus) software, I'd recommend applying this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
  2. Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
  3. Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users/<username>/Monero/ (Mac OS X), or home/<username>/Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.

Upgrading (CLI)

You ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
  2. Extract the new binaries to a new directory of your liking.
  3. Copy over the wallet files from the old directory (i.e. the v0.15.x.x, v0.16.x.x, or v0.17.x.x directory).
  4. Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.17.1.3, it will simply pick up where it left off.
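As an illustration, on Linux the manual upgrade could look roughly like this (directory and wallet names are just examples and will differ on your system):
tar -xjf monero-linux-x64-v0.17.1.3.tar.bz2
cp old-monero-directory/mywallet* monero-x86_64-linux-gnu-v0.17.1.3/
cd monero-x86_64-linux-gnu-v0.17.1.3
./monerod
This merely restates the steps above; the important part is that the wallet files (the files named after your wallet, including the .keys file) move to the new directory, while the blockchain itself stays in its default data directory and does not need to be copied.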

Release notes (GUI)

Some highlights of this minor release are:
  • Android support (experimental)
  • Linux binary is now reproducible (experimental)
  • Simple mode: transaction reliability improvements
  • New transaction confirmation dialog
  • Wizard: minor design changes
  • Linux: high DPI support
  • Fix "can't connect to daemon" issue
  • Minor bug fixes
Some highlights of this major release are:
  • Support for CLSAG transaction format
  • Socks5 proxy support, automatically enabled on Tails
  • Simple mode transactions are sent through local daemon, improved reliability
  • Portable mode, save wallets + config to "storage" folder
  • History page: improvements, incoming / outgoing labels
  • Transfer: new success dialog
  • CMake build system improvements
  • Windows cross compilation support using Docker
  • Various minor bug and UI fixes
Note that you can find a full change log here.

Release notes (CLI)

Some highlights of this minor release are:
  • Add support for I2P and Tor seed nodes (--tx-proxy)
  • Add --ban-list daemon option to ban a list of IP addresses
  • Switch to Dandelion++ fluff mode if no out connections for stem mode
  • Fix a bug with relay_tx
  • Fix a rare readline related crash
  • Use /16 filtering on IPv4-within-IPv6 addresses
  • Give all hosts the same chance of being picked for connecting
  • Minor bugfixes
Some highlights of this major release are:
  • Support for CLSAG transaction format
  • Deterministic unlock times
  • Enforce claiming maximum coinbase amount
  • Serialization format changes
  • Remove most usage of Boost library
  • Always send raw transactions through P2P, don't use bootstrap daemon
  • Update InProofV1, OutProofV1, and ReserveProofV1 to V2
  • ASM optimizations for wallet refresh (macOS / Linux)
  • Randomized delay when forwarding txes from i2p/tor -> ipv4/6
  • New show_qr_code wallet command for CLI
  • Add ZMQ/Pub support for txpool_add and chain_main events
  • Various bug fixes and performance improvements
Note that you can find a full change log here.

Further remarks

  • A guide on pruning can be found here.
  • Ledger Monero users, please be aware that version 1.7.4 of the Ledger Monero App is required in order to properly use CLI or GUI v0.17.1.3.

Guides on how to get started (GUI)

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

Ledger & Trezor CLI guides

Guides to resolve common issues (GUI)

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

Using the GUI with a remote node

In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend to use Simple mode (bootstrap) as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you manually want to set a remote node, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html

Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.
submitted by dEBRUYNE_1 to Monero

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing, and why you will be doing it, all in one convenient manual made for Windows users. That said, if you want to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers to run on those operating systems. Be warned, however, that there are some system requirements necessary to run the CodeReady Containers we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for developing and testing purposes. There are also CodeReady Workspaces, these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment, and management by streamlining and automating those processes.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are entered in the command-line interface, it is necessary to know how it works and how you can browse through files and folders. If you either don’t have this basic knowledge or have trouble with the basic command-line interface commands of PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system’s documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
MacOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of platforms like Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

Red Hat OpenShift CodeReady Containers has the following minimum hardware requirements:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and Network Manager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press login and after that select the option “Create one now”
After making an account the next step is to download the latest release of CodeReady Containers and the pull secret on “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use the command line interface to go to the location in your $PATH where you extracted the CodeReady zip.
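As a small sketch of these steps in PowerShell (the archive name and destination folder are just examples; adjust them to the file you actually downloaded and to a directory on your $PATH):
C:\Users\[username]>Expand-Archive .\Downloads\crc-windows-amd64.zip -DestinationPath C:\crc
C:\Users\[username]>$Env:PATH = "C:\crc;$Env:PATH"
C:\Users\[username]>cd C:\crc
Note that the second line only changes the PATH of the current PowerShell session; add the directory to your user PATH in the system settings if you want the change to persist.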
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps please confirm that the correct and up to date crc binary is in use by checking it with the $crc version command, this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine. For this tutorial however it is not necessary to change the configuration, if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on; if this is the case, please start the machine with crc start -n 1.1.1.1
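As a side note, recent CodeReady Containers releases can also read the pull secret from a file instead of prompting for it interactively; assuming you saved the secret you downloaded earlier as pull-secret.txt, that would look roughly like this (check crc start --help on your version, as this flag is an assumption about newer releases):
C:\Users\[username]\$PATH>crc start -p pull-secret.txt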

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those who wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand before it is able to configure anything; the available subcommands for this binary and virtual machine are:
get, this command allows you to see the values of a configurable property
set/unset, this command can be used for 2 things. To display the names of, or to set and/or unset values of several options and parameters. These parameters being:
○ Shell options
○ Shell attributes
○ Positional parameters
view, this command starts the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this potential issue, you can set the value of a property that starts with skip-check or warn-check to true to skip the check or get a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get
C:\Users\[username]\$PATH>crc config set
C:\Users\[username]\$PATH>crc config unset
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPU’s and amount of memory available for the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set cpus <number-of-vcpus>. Keep in mind that the default number of vCPUs is 4 and the number of vCPUs you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <size-in-mebibytes>. Keep in mind that the default amount of memory is 9216 mebibytes and the amount of memory you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set cpus <number-of-vcpus>
C:\Users\[username]\$PATH>crc config set memory <size-in-mebibytes>
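As a concrete, purely illustrative example, assigning 6 vCPUs and 12 GiB of memory would look like this; note that the property names are written lowercase here, so check crc config --help if your crc version expects something different:
C:\Users\[username]\$PATH>crc config set cpus 6
C:\Users\[username]\$PATH>crc config set memory 12288
These commands have to be run before crc start (or before recreating the virtual machine), since the configuration of an existing virtual machine cannot be changed.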

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing the crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start additional checks to verify the configuration will be executed.

macOS DNS setup

MacOS expects the following DNS configuration for the CodeReady Containers
● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the following CodeReady Containers entry to function properly: api.crc.testing adds an entry to /etc/hosts pointing at the VM IP address.

Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux, where NetworkManager is expected to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to “192.168.130.11”. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11

Accessing the OpenShift Cluster

Accessing the OpenShift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift clusters can be accessed through the OpenShift web console or the client binary(oc).
First you need to execute the $crc console command, this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console
C:\Users\[username]\$PATH>crc console --credentials

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.
To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly:
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as the developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start will provide you with the password that is needed to login with the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 
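Besides oc get co, a few other read-only commands can help you look around the cluster (no assumptions here beyond a running cluster and a logged-in user):
C:\Users\[username]\$PATH>oc whoami
C:\Users\[username]\$PATH>oc get nodes
C:\Users\[username]\$PATH>oc status
oc whoami shows which user you are currently logged in as, oc get nodes lists the (single) node of the CodeReady cluster, and oc status summarizes the resources in the currently selected project.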

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu top left.
Now that you are properly logged in press the dropdown menu shown in the image below, from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with a displayname CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm
imagestream.image.openshift.io/mediawiki imported
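To check that the import actually created an image stream in your project, you can run the following commands; the name mediawiki matches the import command above, and the output will vary:
C:\Users\[username]\$PATH>oc get imagestream mediawiki
C:\Users\[username]\$PATH>oc describe imagestream mediawiki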

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and the topology view. From there, select Container Image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creating process you should see the following, this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling, and horizontal scaling. Vertical scaling is adding only more CPU and hard disk and is no longer supported by OpenShift. Horizontal scaling is increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By either pressing the up or down arrow more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.
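The same scaling can also be done from the command line instead of the arrows in the web console; a sketch, assuming the deployment created earlier is named mediawiki:
C:\Users\[username]\$PATH>oc scale deployment mediawiki --replicas=3
C:\Users\[username]\$PATH>oc get pods
The first command sets the desired number of pods to 3, and the second lets you watch the additional pods being created.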

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since OpenShift Container platform is built on Kubernetes it might be interesting to know some theory about its networking. Kubernetes, on which the OpenShift Container platform is built, ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP address. This makes all containers within the Pod behave as if they were on the same host. By giving each pod its own IP address, pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate / key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
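For reference, a route like this can also be created from the command line; a sketch, assuming the service created for the application is named mediawiki:
C:\Users\[username]\$PATH>oc expose service mediawiki
C:\Users\[username]\$PATH>oc get routes
oc expose creates a route with default settings for the given service, and oc get routes shows the resulting hostname under the apps-crc.testing domain.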
Storage
OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to make persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options.
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and therefore you cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV, this can be done by executing the following command
$oc delete pv <pv-name>
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset, or if you wish to reuse the same storage asset, you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and will display their following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason, and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check if you are logged in as Developer and click on “Monitoring”. Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2
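If you nevertheless want to experiment with monitoring and have enough RAM to spare, newer CodeReady Containers releases reportedly allow enabling it before the cluster is started; treat the property name below as an assumption and verify it with crc config --help on your version:
C:\Users\[username]\$PATH>crc config set enable-cluster-monitoring true
C:\Users\[username]\$PATH>crc start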

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. These can be a developer for developing applications or an administrator for managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group’s members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we’re going to create a new user. The way this is done depends on the identity provider, which in turn depends on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as follows:
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-user-name>
The <identity-provider> is the name of the identity provider in the master configuration. For example, the following commands create an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username>
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we’re going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <binding-name> --clusterrole=<role> --user=<username>
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller --clusterrole=cluster-admin --user=admin

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady container can't connect to the internet due to a Nameserver error. When this is encountered a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.
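If you prefer the command line over the Computer Management GUI, the same group membership can be added from an elevated PowerShell session (the user name is a placeholder); you may need to log out and back in for the change to take effect:
Add-LocalGroupMember -Group "Hyper-V Administrators" -Member "[username]"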

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift

./play.it 2.12: API, GUI and video games

./play.it 2.12: API, GUI and video games

./play.it is a free/libre software that builds native packages for several Linux distributions from DRM-free installers for a collection of commercial games. These packages can then be installed using the standard distribution-provided tools (APT, pacman, emerge, etc.).
A more complete description of ./play.it has already been posted in linux_gaming a couple months ago: ./play.it, an easy way to install commercial games on GNU/Linux
It's already been one year since version 2.11 was released, in January 2019. We will only briefly review the changelog of version 2.12 and focus on the different points of ./play.it that kept us busy during all this time, and of which coding was only a small part.

What’s new with 2.12?

Though not the focus of this article, it would be a pity not to present all the added features of this brand new version. ;)
Compared to the usual updates, 2.12 is a major one, especially since, for two years, we had slowed down the addition of new features. Some patches had been gathering dust since the end of 2018 before finally being integrated in this update!
The list of changes for this 2.12 release can be found on our forge. Here is a full copy for convenience:

Development migration

History

As with many free/libre projects, ./play.it development started on some random sector of a creaking hard drive, and unsurprisingly, a whole part of its history (everything predating version 1.13.15, released on March 30th, 2016) disappeared into limbo because some unwise operation destroyed the only copy of the repository… Lesson learned: what is not shared does not last long, and so the first public Git repository of the project was born. The easing of collaborative work was only accidentally achieved by this quest for permanence, and was not the original motivation for making the repository publicly available.
Following this decision, ./play.it source code has been hosted successively by many shared forge platforms:

Dedicated forge

As development progressed, ./play.it began to increase its need for resources, dividing its code into several repositories to improve the workflow of the different aspects of the project, adding continuous integration tests and their constraints, etc. A furious desire to understand the nooks and crannies behind a forge platform was the last deciding factor towards hosting a dedicated forge.
So it happened, we deployed a forge platform on a dedicated server, hugely benefiting from the tremendous work achieved by the Debian maintainers of the GitLab package. In return, we tried to contribute our findings to improve this software's packaging.
That was not expected, but this migration happened just a little time before the announcement “Déframasoftisons Internet !” (French article) about the planned end of Framagit.
This dedicated instance used to be hosted on a VPS rented from Digital Ocean until the second half of July 2020, and since then has been moved to another VPS, rented from Hetzner. The specifications are similar, as well as the service, but thanks to this migration our hosting costs have been cut in half. Keep in mind that this is paid for by a single person, so any little donation helps a lot on this front. ;)
To the surprise of our system administrator, this last migration took only a couple hours with no service interruption reported by our users.

Forge access

This new forge can be found at forge.dotslashplay.it. Registrations are open to the public, but we ask you to not abuse this, the main restriction being that we do not wish to host projects unrelated to ./play.it. Of course exceptions are made for our active contributors, who are allowed to host some personal projects there.
So, if you wish to use this forge to host your own work, you first need to make some significant contributions to ./play.it.

API

With the collection of supported games growing endlessly, we have started the development of a public API allowing access to lots of information related to ./play.it.
This API, which is not yet stabilized, is simply an interface to a versioned database containing all the ./play.it scripts, the archives they handle, and the games installable through the project. Relations between those items are, of course, handled, enabling its use for requests like: “What packages are required on my system to install Cæsar Ⅲ?” or “What are the free (as in beer) games handled via DOSBox?”.
Originally developed as support for the new, in-development website (we'll talk about it later on), this API should facilitate the development of tools around ./play.it. For example, it'll be useful for whoever would like to build a complete video game handling software (downloading, installation, starting, etc.) using ./play.it as one of its building bricks.
For those curious about the technical side, it's an API based on Lumen that makes requests on a MariaDB database, all self-hosted on Debian Sid. Not only is the code of the API versioned on our forge, but also the structure and content of the databases, which will allow those who desire it to easily install a local version.

New website

Based on the aforementioned API, a new website is under development and will replace our current website based on DokuWiki.
Indeed, while the lack of a database and the plain-text file structure of DokuWiki seemed attractive at first, when ./play.it supported only a handful of games (link in French), this approach became more inconvenient as the library of games supported by ./play.it grew.
We shall make an in-depth presentation of this website for the 2.13 release of ./play.it, but a public demo of the development version from our forge is already available.
If you feel like providing a helping hand on this task, some priority tasks have been identified to allow opening a new website able to replace the current one. And for those interested in technical details, this website was developed in PHP using the Laravel framework. The current in-development version is hosted for now on the same Debian Sid server as the API.

GUI

A regular comment made about the project is that, if the purpose is to make installing games accessible to everyone without technical skills, having to run scripts in the terminal remains somewhat intimidating. Our answer until now has been that the project itself doesn't aim to provide a graphical interface (KISS principle, "Keep it simple, stupid", still and always), but that it would be relatively easy to develop a graphical front-end to it later on.
Well, it happens that this is now a reality. Around the time of our latest publication, one of our contributors, using the API we just talked about, developed a small prototype that is usable enough to warrant a little shout out. :-)
In practice, it is some small Python 3 code (an HCI completely in POSIX shell is for a later date :-°), using GTK 3 (and still a VTE terminal to display the commands issued, but the user shouldn't have to input anything in it, except perhaps the root password to install some packages). This allowed us to verify that, as we used to say, it would be relatively easy, since a script of less than 500 lines of code (written quickly over a weekend) was enough to do the job!
Of course, this graphical interface project stays independent from the main project, and is maintained in a specific repository. It seems interesting to us to promote it in order to ease the use of ./play.it, but this doesn't prevent any other similar projects from being born, for example using a different language or graphical toolkit (overall, we don't have any particular affinity towards Python or GTK).
Using this HCI involves three steps: first, a list of available games is displayed, coming directly from our API. You just need to select from the list (optionally using the search bar) the game you want to install. It then switches to a second display, which lists the required files. If several alternatives are available, the user can select the one they want to use. All those files must be in the same directory; the address bar at the top lets you select which directory to use (clicking the open button at the top opens a filesystem navigation window). Once all those files are available (if they can be downloaded, the software will do it automatically), you can move ahead to the third step, which is just watching ./play.it do its job :-) Once done, a simple click on the button at the bottom will run the game (and since, from this step, the game is fully integrated into your system as usual, you no longer need this tool to run it).
To download potentially missing files, the HCI will use, depending on what's available on the system, either wget, curl or aria2c (this last one also handling torrents), whose output will be displayed in the terminal during the third phase, just before running the scripts. For privilege escalation to install packages, sudo will be used preferentially if available (with the option to use a third-party application for password input, if the corresponding environment variable is set, which is more user-friendly), otherwise su will be used.
Of course, any suggestion for an improvement will be received with pleasure.

New games

Of course, such an announcement would not be complete without a list of the games that got added to our collection since the 2.11 release… So here you go:
If your favourite game is not supported by ./play.it yet, you should ask for it in the dedicated tracker on our forge. The only requirement to be a valid request is that there exists a version of the game that is not burdened by DRM.

What’s next?

Our team being inexhaustible, work on the future 2.13 version has already begun…
A few major objectives of this next version are:
If your desired features aren't on this list, don't hesitate to let us know in the comments of this news release. ;)

Links

submitted by vv224 to linux_gaming

CLI & GUI v0.16.0.3 'Nitrogen Nebula' released!

This is the CLI & GUI v0.16.0.3 'Nitrogen Nebula' point release. This release predominantly features bug fixes and performance improvements.

(Direct) download links (GUI)

(Direct) download links (CLI)

GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 # This GPG-signed message exists to confirm the SHA256 sums of Monero binaries. # # Please verify the signature against the key for binaryFate in the # source code repository (/utils/gpg_keys). # # ## CLI 75b198869a3a117b13b9a77b700afe5cee54fd86244e56cb59151d545adbbdfd monero-android-armv7-v0.16.0.3.tar.bz2 b48918a167b0961cdca524fad5117247239d7e21a047dac4fc863253510ccea1 monero-android-armv8-v0.16.0.3.tar.bz2 727a1b23fbf517bf2f1878f582b3f5ae5c35681fcd37bb2560f2e8ea204196f3 monero-freebsd-x64-v0.16.0.3.tar.bz2 6df98716bb251257c3aab3cf1ab2a0e5b958ecf25dcf2e058498783a20a84988 monero-linux-armv7-v0.16.0.3.tar.bz2 6849446764e2a8528d172246c6b385495ac60fffc8d73b44b05b796d5724a926 monero-linux-armv8-v0.16.0.3.tar.bz2 cb67ad0bec9a342b0f0be3f1fdb4a2c8d57a914be25fc62ad432494779448cc3 monero-linux-x64-v0.16.0.3.tar.bz2 49aa85bb59336db2de357800bc796e9b7d94224d9c3ebbcd205a8eb2f49c3f79 monero-linux-x86-v0.16.0.3.tar.bz2 16a5b7d8dcdaff7d760c14e8563dd9220b2e0499c6d0d88b3e6493601f24660d monero-mac-x64-v0.16.0.3.tar.bz2 5d52712827d29440d53d521852c6af179872c5719d05fa8551503d124dec1f48 monero-win-x64-v0.16.0.3.zip ff094c5191b0253a557be5d6683fd99e1146bf4bcb99dc8824bd9a64f9293104 monero-win-x86-v0.16.0.3.zip # ## GUI 50fe1d2dae31deb1ee542a5c2165fc6d6c04b9a13bcafde8a75f23f23671d484 monero-gui-install-win-x64-v0.16.0.3.exe 20c03ddb1c82e1bcb73339ef22f409e5850a54042005c6e97e42400f56ab2505 monero-gui-linux-x64-v0.16.0.3.tar.bz2 574a84148ee6af7119fda6b9e2859e8e9028fe8a8eec4dfdd196aeade47e9c90 monero-gui-mac-x64-v0.16.0.3.dmg 371cb4de2c9ccb5ed99b2622068b6aeea5bdfc7b9805340ea7eb92e7c17f2478 monero-gui-win-x64-v0.16.0.3.zip # # # ~binaryFate -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl81bL8ACgkQ8K9NRioL 35J+UA//bgY6Mhikh8Cji8i2bmGXEmGvvWMAHJiAtAG2lgW3BT9BHAFMfEpUP5rk svFNsUY/Uurtzxwc/myTPWLzvXVMHzaWJ/EMKV9/C3xrDzQxRnl/+HRS38aT/D+N gaDjchCfk05NHRIOWkO3+2Erpn3gYZ/VVacMo3KnXnQuMXvAkmT5vB7/3BoosOU+ B1Jg5vPZFCXyZmPiMQ/852Gxl5FWi0+zDptW0jrywaS471L8/ZnIzwfdLKgMO49p Fek1WUUy9emnnv66oITYOclOKoC8IjeL4E1UHSdTnmysYK0If0thq5w7wIkElDaV avtDlwqp+vtiwm2svXZ08rqakmvPw+uqlYKDSlH5lY9g0STl8v4F3/aIvvKs0bLr My2F6q9QeUnCZWgtkUKsBy3WhqJsJ7hhyYd+y+sBFIQH3UVNv5k8XqMIXKsrVgmn lRSolLmb1pivCEohIRXl4SgY9yzRnJT1OYHwgsNmEC5T9f019QjVPsDlGNwjqgqB S+Theb+pQzjOhqBziBkRUJqJbQTezHoMIq0xTn9j4VsvRObYNtkuuBQJv1wPRW72 SPJ53BLS3WkeKycbJw3TO9r4BQDPoKetYTE6JctRaG3pSG9VC4pcs2vrXRWmLhVX QUb0V9Kwl9unD5lnN17dXbaU3x9Dc2pF62ZAExgNYfuCV/pTJmc= =bbBm -----END PGP SIGNATURE----- 

Upgrading (GUI)

Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear with the new binary.
In case you want to update manually, you ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website. If you run active AV (AntiVirus) software, I'd recommend applying this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
  2. Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
  3. Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), /Users/<username>/Monero/ (Mac OS X), or /home/<username>/Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.

Upgrading (CLI)

You ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
  2. Extract the new binaries to a new directory of your liking.
  3. Copy over the wallet files from the old directory (i.e. the v0.15.x.x or v0.16.0.x directory).
  4. Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.16.0.3, it will simply pick up where it left off.

Release notes (GUI)

  • macOS app is now notarized by Apple
  • CMake improvements
  • Add support for IPv6 remote nodes
  • Add command history to Logs page
  • Add "Donate to Monero" button
  • Indicate probability of finding a block on Mining page
  • Minor bug fixes
Note that you can find a full change log here.

Release notes (CLI)

  • DoS fixes
  • Add option to print daily coin emission and fees in monero-blockchain-stats
  • Minor bug fixes
Note that you can find a full change log here.

Further remarks

  • A guide on pruning can be found here.
  • Ledger Monero users, please be aware that version 1.6.0 of the Ledger Monero App is required in order to properly use CLI or GUI v0.16.

Guides on how to get started (GUI)

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

Ledger & Trezor CLI guides

Guides to resolve common issues (GUI)

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

Using the GUI with a remote node

In the wizard, you can select either Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to set a remote node manually, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html

Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.
submitted by dEBRUYNE_1 to Monero [link] [comments]

Securely generate 24-word Mnemonic using Dice

Disclaimer: This is for education purposes only. This is quite advanced for the average user. If you are going to protect funds with a mnemonic generated using this method, use only a verified copy of Tails and do everything in a completely secure, offline environment.
Let me begin by saying there is nothing to suggest that the RNG used in popular software/hardware wallets is flawed. The generation process uses TRNGs certified by third parties, which should satisfy the large majority of users. However, if you are the type that trusts no one and you want to verify that your BIP39 mnemonic is truly random, or you just want to find out how it works, then you must generate it yourself.
The process itself is straightforward. The BIP39 dictionary contains 2048 words, and each word represents 11 binary bits. A 24-word mnemonic encodes 256 bits of entropy plus an 8-bit checksum: the first 23 words come from the entropy alone, and the final word encodes the last few entropy bits together with the checksum. To create our own mnemonic we start by generating 256 bits of random binary and then calculate the checksum. There are many ways to randomly generate the binary, but for this tutorial we use six-sided dice. (If you have another means of generating the 256 bits, such as coin flips, then jump straight to step 10.)
The entire process will be done using only tools built into Tails. All base conversions will be done in the Linux terminal using 'bc', the basic calculator command. Calculating the checksum will use the Python standard library.
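To make the checksum step concrete, here is a minimal Python 3 sketch (standard library only; the wordlist path, the function name, and the example input are illustrative assumptions, not part of the original tutorial). It takes the 256-bit string produced by your dice rolls, appends the 8-bit SHA256 checksum, and maps the resulting 264 bits onto 24 words. As with the rest of the process, only run something like this offline, e.g. inside Tails.

    import hashlib

    def entropy_to_mnemonic(entropy_bits, wordlist_path="english.txt"):
        # entropy_bits: a string of exactly 256 '0'/'1' characters from your dice rolls.
        # wordlist_path: a local copy of the English BIP39 wordlist (2048 words, one per line).
        assert len(entropy_bits) == 256 and set(entropy_bits) <= {"0", "1"}
        entropy_bytes = int(entropy_bits, 2).to_bytes(32, "big")
        # For 256-bit entropy the checksum is the first 8 bits of SHA256(entropy).
        checksum = bin(hashlib.sha256(entropy_bytes).digest()[0])[2:].zfill(8)
        bits = entropy_bits + checksum  # 264 bits = 24 groups of 11 bits
        with open(wordlist_path) as f:
            words = [w.strip() for w in f if w.strip()]
        assert len(words) == 2048
        return " ".join(words[int(bits[i:i + 11], 2)] for i in range(0, 264, 11))

    # Example call only -- never use a fixed or non-random string for real funds.
    print(entropy_to_mnemonic("0" * 256))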
Tools needed:

Create BIP39 Mnemonic with Dice

Picture Album - https://imgur.com/a/sXTHr6c
submitted by Mcgillby to Bitcoin [link] [comments]

I can't install Percollate on Linux Mint 20 Ulyana. I know I could use wkhtmltopdf but I know from experience that Percollate is superior.

LONG POST WARNING - lots of code copied in the interest of clarity.

I tried running "npm i percollate" and after installing it gave:
npm WARN enoent ENOENT: no such file or directory, open '/home/name/package.json'
npm WARN name No description
npm WARN name No repository field.
npm WARN name No README data
npm WARN name No license field.
[email protected]:~$ npm i percollate
npm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated [email protected]: request-promise-native has been deprecated because it extends the now deprecated request package, see https://github.com/request/request/issues/3142
npm WARN deprecated [email protected]: this library is no longer supported
- u/sindresorhus/is@0.14.0 node_modules/@sindresorhus/is
- [email protected] node_modules/ansi-regex
[email protected] node_modules/concat-stream/node_modules/safe-buffer -> node_modules/archiver-utils/node_modules/safe-buffer
string_[email protected] node_modules/concat-stream/node_modules/string_decoder -> node_modules/archiver-utils/node_modules/string_decoder
[email protected] node_modules/concat-stream/node_modules/readable-stream -> node_modules/archiver-utils/node_modules/readable-stream
- [email protected] node_modules/array-equal
- [email protected] node_modules/async-limiter
- [email protected] node_modules/buffer-from
- [email protected] node_modules/cacheable-request/node_modules/get-stream
- [email protected] node_modules/cacheable-request/node_modules/lowercase-keys
- [email protected] node_modules/cli-spinners
- [email protected] node_modules/clone
- [email protected] node_modules/color-name
- [email protected] node_modules/color-convert
- [email protected] node_modules/ansi-styles
- [email protected] node_modules/defaults
- [email protected] node_modules/defer-to-connect
- u/szmarczak/http-timer@1.1.2 node_modules/@szmarczak/http-timer
- [email protected] node_modules/duplexer3
- [email protected] node_modules/es6-promise
- [email protected] node_modules/es6-promisify
- [email protected] node_modules/escape-string-regexp
- [email protected] node_modules/fsevents
- [email protected] node_modules/get-stream
- [email protected] node_modules/has-flag
- [email protected] node_modules/http-cache-semantics
- [email protected] node_modules/json-buffer
- [email protected] node_modules/keyv
- [email protected] node_modules/lowercase-keys
- [email protected] node_modules/mimic-fn
- [email protected] node_modules/mimic-response
- [email protected] node_modules/clone-response
- [email protected] node_modules/decompress-response
- [email protected] node_modules/minimist
- [email protected] node_modules/mkdirp
- [email protected] node_modules/normalize-url
- [email protected] node_modules/nunjucks/node_modules/commander
- [email protected] node_modules/onetime
- [email protected] node_modules/os-tmpdir
- [email protected] node_modules/p-cancelable
- [email protected] node_modules/percollate/node_modules/agent-base
- [email protected] node_modules/percollate/node_modules/https-proxy-agent/node_modules/ms
- [email protected] node_modules/percollate/node_modules/https-proxy-agent/node_modules/debug
- [email protected] node_modules/percollate/node_modules/https-proxy-agent
- [email protected] node_modules/percollate/node_modules/ms
- [email protected] node_modules/percollate/node_modules/extract-zip/node_modules/debug
- [email protected] node_modules/percollate/node_modules/rimraf
- [email protected] node_modules/pn
- [email protected] node_modules/prepend-http
- [email protected] node_modules/resolve-url
- [email protected] node_modules/responselike
- [email protected] node_modules/cacheable-request
- [email protected] node_modules/signal-exit
- [email protected] node_modules/restore-cursor
- [email protected] node_modules/cli-cursor
- [email protected] node_modules/source-map-url
- [email protected] node_modules/strip-ansi
- [email protected] node_modules/supports-color
- [email protected] node_modules/chalk
- [email protected] node_modules/log-symbols
- [email protected] node_modules/to-readable-stream
- [email protected] node_modules/typedarray
- [email protected] node_modules/concat-stream
- [email protected] node_modules/percollate/node_modules/extract-zip
- [email protected] node_modules/urix
- [email protected] node_modules/url-parse-lax
- [email protected] node_modules/wcwidth
- [email protected] node_modules/got
- [email protected] node_modules/ora
- [email protected] node_modules/percollate/node_modules/puppeteer
/home/name
└─┬ [email protected]
├── u/mozilla/readability@0.3.0
├─┬ [email protected]
│ ├─┬ [email protected]
│ │ ├── [email protected]
│ │ ├─┬ [email protected]
│ │ │ └─┬ [email protected]
│ │ │ ├── [email protected]
│ │ │ └── string_[email protected]
│ │ ├── [email protected]
│ │ ├── [email protected]
│ │ ├── [email protected]
│ │ ├── [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ └─┬ [email protected]
│ └─┬ [email protected]
│ └─┬ [email protected]
│ └── [email protected]
├── UNMET PEER DEPENDENCY [email protected]^2.5.0
├─┬ [email protected]
│ └── [email protected]
├── [email protected]
├─┬ [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ ├─┬ domex[email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ └── [email protected]
│ ├── [email protected]
│ ├── [email protected]
│ ├─┬ [email protected]
│ │ ├── [email protected]
│ │ └── [email protected]
│ └── [email protected]
├── [email protected]
├── [email protected]
├─┬ [email protected]
│ └── [email protected]
├── [email protected]
├── [email protected]
└── [email protected]

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]~2.1.2 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN enoent ENOENT: no such file or directory, open '/home/name/package.json'
npm WARN [email protected] requires a peer of [email protected]^2.5.0 but none was installed.
npm WARN name No description
npm WARN name No repository field.
npm WARN name No README data
npm WARN name No license field.

I have learned that Puppeteer is deprecated [???] and that I should apparently be using something called Playwright. I am befuddled, because Percollate works swimmingly on Elementary OS. I use Elementary on the one device I have because my family and I like the ease of use, but it runs poorly on this other device. As stated previously I despise wkhtmltopdf because it doesn't format websites well at all. I want to be able to download articles and recipes and things to read in the evenings, printed out.

When I run "percollate --version" I get:
0.8.0

Percollate is installed but its dependencies seem to have all broken and / or been abandoned ... on this computer. Is this because Mint Ulyana is Ubuntu 20? Isn't Elementary OS Hera also Ubuntu 20? I'm so lost, I'm relatively new to Linux and still incapable of fixing things like this on my own. I can't write code at all so I wouldn't be able to go about fixing Percollate myself if it truly is broken.

Is there at the very least a way to make wkhtmltopdf format websites nicely without loads of errors?

Sorry for the long post but I wanted to be thorough. Oh yes, I also ran "PUPPETEER_PRODUCT=firefox npm i puppeteer" and that didn't fix anything. This is what happens when I try to run "percollate pdf https://winstonchurchill.org/resources/speeches/1940-the-finest-hour/we-shall-fight-on-the-beaches/":

Fetching: https://winstonchurchill.org/resources/speeches/1940-the-finest-hour/we-shall-fight-on-the-beaches/
Enhancing web page... ✓
(node:4596) UnhandledPromiseRejectionWarning: Error: Could not find browser revision 782078. Run "PUPPETEER_PRODUCT=firefox npm install" or "PUPPETEER_PRODUCT=firefox yarn install" to download a supported Firefox browser binary.
at ChromeLauncher.launch (/usr/local/lib/node_modules/percollate/node_modules/puppeteer/lib/cjs/puppeteer/node/Launcher.js:86:23)
(node:4596) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:4596) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Does anyone know anything that would help me?
submitted by JonathanWillard to linuxquestions [link] [comments]

Having trouble deploying to netlify

Is there any way I could get some help figuring out why this will not deploy and is erroring out? I took out personal info, obviously.
6:03:21 PM: Build ready to start 6:03:23 PM: build-image version: ca811f47d4c1cbd1812d1eb6ecb0c977e86d1a1d 6:03:23 PM: build-image tag: v3.3.20 6:03:23 PM: buildbot version: be8ecf2af866e16fa4301cc5c14de2ccbbb21cf4 6:03:23 PM: Fetching cached dependencies 6:03:23 PM: Starting to download cache of 254.8KB 6:03:23 PM: Finished downloading cache in 70.698045ms 6:03:23 PM: Starting to extract cache 6:03:23 PM: Failed to fetch cache, continuing with build 6:03:23 PM: Starting to prepare the repo for build 6:03:24 PM: No cached dependencies found. Cloning fresh repo 6:03:24 PM: git clone github user 6:03:25 PM: Preparing Git Reference refs/heads/master 6:03:26 PM: Different publish path detected, going to use the one specified in the Netlify configuration file: 'public' versus 'public/' in the Netlify UI 6:03:26 PM: Starting build script 6:03:26 PM: Installing dependencies 6:03:26 PM: Python version set to 2.7 6:03:27 PM: v12.18.0 is already installed. 6:03:28 PM: Now using node v12.18.0 (npm v6.14.4) 6:03:28 PM: Started restoring cached build plugins 6:03:28 PM: Finished restoring cached build plugins 6:03:28 PM: Attempting ruby version 2.7.1, read from environment 6:03:29 PM: Using ruby version 2.7.1 6:03:29 PM: Using PHP version 5.6 6:03:30 PM: 5.2 is already installed. 6:03:30 PM: Using Swift version 5.2 6:03:30 PM: Started restoring cached node modules 6:03:30 PM: Finished restoring cached node modules 6:03:30 PM: Installing NPM modules using NPM version 6.14.4 6:04:15 PM: > [email protected] install /opt/build/repo/node_modules/sharp 6:04:15 PM: > (node install/libvips && node install/dll-copy && prebuild-install) || (node-gyp rebuild && node install/dll-copy) 6:04:15 PM: info sharp Downloading https://github.com/lovell/sharp-libvips/releases/download/v8.8.1/libvips-8.8.1-linux-x64.tar.gz 6:04:18 PM: > [email protected] install /opt/build/repo/node_modules/node-sass 6:04:18 PM: > node scripts/install.js 6:04:18 PM: Downloading binary from https://github.com/sass/node-sass/releases/download/v4.14.1/linux-x64-72_binding.node 6:04:18 PM: Download complete 6:04:18 PM: Binary saved to /opt/build/repo/node_modules/node-sass/vendolinux-x64-72/binding.node 6:04:19 PM: Caching binary to /opt/buildhome/.npm/node-sass/4.14.1/linux-x64-72_binding.node 6:04:19 PM: > [email protected] postinstall /opt/build/repo/node_modules/@jimp/plugin-circle/node_modules/core-js 6:04:19 PM: > node -e "try{require('./postinstall')}catch(e){}" 6:04:19 PM: > [email protected] postinstall /opt/build/repo/node_modules/@jimp/plugin-fisheye/node_modules/core-js 6:04:19 PM: > node -e "try{require('./postinstall')}catch(e){}" 6:04:19 PM: > [email protected] postinstall /opt/build/repo/node_modules/@jimp/plugin-shadow/node_modules/core-js 6:04:19 PM: > node -e "try{require('./postinstall')}catch(e){}" 6:04:19 PM: > [email protected] postinstall /opt/build/repo/node_modules/@jimp/plugin-threshold/node_modules/core-js 6:04:19 PM: > node -e "try{require('./postinstall')}catch(e){}" 6:04:19 PM: > [email protected] postinstall /opt/build/repo/node_modules/core-js 6:04:19 PM: > node -e "try{require('./postinstall')}catch(e){}" 6:04:19 PM: > [email protected] postinstall /opt/build/repo/node_modules/core-js-pure 6:04:19 PM: > node -e "try{require('./postinstall')}catch(e){}" 6:04:20 PM: > [email protected] postinstall /opt/build/repo/node_modules/potrace/node_modules/core-js 6:04:20 PM: > node -e "try{require('./postinstall')}catch(e){}" 6:04:21 PM: > [email protected] postinstall 
/opt/build/repo/node_modules/gatsby-recipes/node_modules/gatsby-telemetry 6:04:21 PM: > node src/postinstall.js || true 6:04:21 PM: > [email protected] postinstall /opt/build/repo/node_modules/gatsby-telemetry 6:04:21 PM: > node src/postinstall.js || true 6:04:21 PM: > [email protected] postinstall /opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli/node_modules/gatsby-telemetry 6:04:21 PM: > node src/postinstall.js || true 6:04:21 PM: > [email protected] postinstall /opt/build/repo/node_modules/cwebp-bin 6:04:21 PM: > node lib/install.js 6:04:22 PM: :heavy_check_mark: cwebp pre-build test passed successfully 6:04:22 PM: > [email protected] postinstall /opt/build/repo/node_modules/mozjpeg 6:04:22 PM: > node lib/install.js 6:04:22 PM: :heavy_check_mark: mozjpeg pre-build test passed successfully 6:04:22 PM: > [email protected] postinstall /opt/build/repo/node_modules/pngquant-bin 6:04:22 PM: > node lib/install.js 6:04:23 PM: :heavy_check_mark: pngquant pre-build test passed successfully 6:04:23 PM: > [email protected] postinstall /opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli 6:04:23 PM: > node scripts/postinstall.js 6:04:23 PM: > [email protected] postinstall /opt/build/repo/node_modules/gatsby 6:04:23 PM: > node scripts/postinstall.js 6:04:23 PM: > [email protected] postinstall /opt/build/repo/node_modules/node-sass 6:04:23 PM: > node scripts/build.js 6:04:23 PM: Binary found at /opt/build/repo/node_modules/node-sass/vendolinux-x64-72/binding.node 6:04:23 PM: Testing binary 6:04:23 PM: Binary is fine 6:04:26 PM: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/watchpack/node_modules/fsevents): 6:04:26 PM: npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) 6:04:26 PM: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents): 6:04:26 PM: npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) 6:04:26 PM: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/chokidanode_modules/fsevents): 6:04:26 PM: npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) 6:04:26 PM: added 2415 packages from 1245 contributors and audited 2489 packages in 55.244s 6:04:28 PM: 110 packages are looking for funding 6:04:28 PM: run npm fund for details 6:04:28 PM: found 18 low severity vulnerabilities 6:04:28 PM: run npm audit fix to fix them, or npm audit for details 6:04:28 PM: NPM modules installed 6:04:28 PM: Started restoring cached go cache 6:04:28 PM: Finished restoring cached go cache 6:04:28 PM: go version go1.14.4 linux/amd64 6:04:28 PM: go version go1.14.4 linux/amd64 6:04:28 PM: Installing missing commands 6:04:28 PM: Verify run directory 6:04:29 PM: 6:04:29 PM: ┌─────────────────────────────┐ 6:04:29 PM: │ Netlify Build │ 6:04:29 PM: └─────────────────────────────┘ 6:04:29 PM: 6:04:29 PM: ❯ Version 6:04:29 PM: @netlify/build 3.0.1 6:04:29 PM: 6:04:29 PM: ❯ Flags 6:04:29 PM: deployId: Id 6:04:29 PM: mode: buildbot 6:04:29 PM: 6:04:29 PM: ❯ Current directory 6:04:29 PM: /opt/build/repo 6:04:29 PM: 6:04:29 PM: ❯ Config file 6:04:29 PM: No config file was defined: using default values. 
6:04:29 PM: 6:04:29 PM: ❯ Context 6:04:29 PM: production 6:04:29 PM: 6:04:29 PM: ┌───────────────────────────────────┐ 6:04:29 PM: │ 1. Build command from Netlify app │ 6:04:29 PM: └───────────────────────────────────┘ 6:04:29 PM: 6:04:29 PM: $ gatsby build 6:04:30 PM: /opt/build/repo/node_modules/yoga-layout-prebuilt/yoga-layout/build/Release/nbind.js:53 6:04:30 PM: throw ex; 6:04:30 PM: ^ 6:04:30 PM: Error: Cannot find module 'ink' 6:04:30 PM: Require stack: 6:04:30 PM: - /opt/build/repo/node_modules/ink-box/dist.js 6:04:30 PM: - /opt/build/repo/node_modules/gatsby-recipes/dist/cli.js 6:04:30 PM: - /opt/build/repo/node_modules/gatsby-recipes/dist/index.js 6:04:30 PM: - /opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli/lib/recipes.js 6:04:30 PM: - /opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli/lib/create-cli.js 6:04:30 PM: - /opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli/lib/index.js 6:04:30 PM: - /opt/build/repo/node_modules/gatsby/dist/bin/gatsby.js 6:04:30 PM: at Function.Module._resolveFilename (internal/modules/cjs/loader.js:966:15) 6:04:30 PM: at Function.Module._load (internal/modules/cjs/loader.js:842:27) 6:04:30 PM: at Module.require (internal/modules/cjs/loader.js:1026:19) 6:04:30 PM: at require (internal/modules/cjs/helpers.js:72:18) 6:04:30 PM: at Object. (/opt/build/repo/node_modules/ink-box/dist.js:5:12) 6:04:30 PM: at Module._compile (internal/modules/cjs/loader.js:1138:30) 6:04:30 PM: at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10) 6:04:30 PM: at Module.load (internal/modules/cjs/loader.js:986:32) 6:04:30 PM: at Function.Module._load (internal/modules/cjs/loader.js:879:14) 6:04:30 PM: at Module.require (internal/modules/cjs/loader.js:1026:19) { 6:04:30 PM: code: 'MODULE_NOT_FOUND', 6:04:30 PM: requireStack: [ 6:04:30 PM: '/opt/build/repo/node_modules/ink-box/dist.js', 6:04:30 PM: '/opt/build/repo/node_modules/gatsby-recipes/dist/cli.js', 6:04:30 PM: '/opt/build/repo/node_modules/gatsby-recipes/dist/index.js', 6:04:30 PM: '/opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli/lib/recipes.js', 6:04:30 PM: '/opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli/lib/create-cli.js', 6:04:30 PM: '/opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli/lib/index.js', 6:04:30 PM: '/opt/build/repo/node_modules/gatsby/dist/bin/gatsby.js' 6:04:30 PM: ] 6:04:30 PM: } 6:04:30 PM: 6:04:30 PM: ┌─────────────────────────────┐ 6:04:30 PM: │ "build.command" failed │ 6:04:30 PM: └─────────────────────────────┘ 6:04:30 PM: 6:04:30 PM: Error message 6:04:30 PM: Command failed with exit code 7: gatsby build 6:04:30 PM: 6:04:30 PM: Error location 6:04:30 PM: In Build command from Netlify app: 6:04:30 PM: gatsby build 6:04:30 PM: 6:04:30 PM: Resolved config 6:04:30 PM: build: 6:04:30 PM: command: gatsby build 6:04:30 PM: commandOrigin: ui 6:04:30 PM: environment: 6:04:30 PM: - ADSENSE_ID 6:04:30 PM: - GATSBY_BUZZSPROUT_API_KEY 6:04:30 PM: - GATSBY_MAILCHIMP_AUDIENCE_ID 6:04:30 PM: - GATSBY_PODCAST_NUMBER 6:04:30 PM: - GATSBY_PROXY 6:04:30 PM: - NODE_ENV 6:04:30 PM: publish: /opt/build/repo/public 6:04:30 PM: Caching artifacts 6:04:30 PM: Started saving node modules 6:04:30 PM: Finished saving node modules 6:04:30 PM: Started saving build plugins 6:04:30 PM: Finished saving build plugins 6:04:30 PM: Started saving pip cache 6:04:30 PM: Finished saving pip cache 6:04:30 PM: Started saving emacs cask dependencies 6:04:30 PM: Finished saving emacs cask dependencies 6:04:30 PM: Started saving maven dependencies 6:04:31 PM: 
Finished saving maven dependencies 6:04:31 PM: Started saving boot dependencies 6:04:31 PM: Finished saving boot dependencies 6:04:31 PM: Started saving go dependencies 6:04:31 PM: Finished saving go dependencies 6:04:33 PM: Error running command: Build script returned non-zero exit code: 1 6:04:33 PM: Failing build: Failed to build site 6:04:33 PM: Failed during stage 'building site': Build script returned non-zero exit code: 1 6:04:33 PM: Finished processing build request in 1m10.018686427s
submitted by Blabbers01 to gatsbyjs [link] [comments]

Starting to collect postings under various topics related to the C language and C programming

Hi there,
I want to start to collect postings to C_Programming under various topics related to (you guessed it) C programming. You can, of course, help me get more postings and more topics. Some time in the future we can make this an entry to the wiki.
translation uwu - an uwu translator in C https://old.reddit.com/C_Programming/comments/e36f0v/uwu_an_uwu_translator_in_c/ How can i deliberately cause the undefined behavior of violating the translation limit of identifiers w/ c89 and c90? https://old.reddit.com/C_Programming/comments/doyaxa/how_can_i_deliberately_cause_the_undefined/ Is there a way of translating JavaScript code to C? https://old.reddit.com/C_Programming/comments/e7zq9j/is_there_a_way_of_translating_javascript_code_to_c/ How does C deal with various operating systems having different new line returns (e.g unix '\n', windows '\r\n', mac '\r')? https://old.reddit.com/C_Programming/comments/f668s0/how_does_c_deal_with_various_operating_systems/ Why having a header file for declaring structs and function prototypes and a .c file with the code of functions? https://old.reddit.com/C_Programming/comments/enq9zo/why_having_a_header_file_for_declaring_structs/ Why does the "static" keyword have 2 seemingly different meanings depending on context? https://old.reddit.com/C_Programming/comments/etc1wx/why_does_the_static_keyword_have_2_seemingly/ Question regarding 1.0e-5 https://old.reddit.com/C_Programming/comments/fedx8p/question_regarding_10e5/ How to approach big c code base https://old.reddit.com/C_Programming/comments/ewcg33/how_to_approach_big_c_code_base/ Create DNS answer C language using structs https://old.reddit.com/C_Programming/comments/ef8ykg/create_dns_answer_c_language_using_structs/ Hiding helper functions in C https://old.reddit.com/C_Programming/comments/ec6wm2/hiding_helper_functions_in_c/ Is it best practice to return EXIT_SUCCESS/EXIT_FAILURE instead of 0/1 ? https://old.reddit.com/C_Programming/comments/efs0vg/is_it_best_practice_to_return_exit_successexit/ Why Header-Only Libraries Are a Bad Idea https://old.reddit.com/C_Programming/comments/cakxnv/why_headeronly_libraries_are_a_bad_idea/ Switch statement to handle return messages https://old.reddit.com/C_Programming/comments/cgqyay/switch_statement_to_handle_return_messages/ File size impact of tabs vs. spaces in C code (Linux Kernel, specifically) https://old.reddit.com/C_Programming/comments/auv5mg/file_size_impact_of_tabs_vs_spaces_in_c_code/ How to pass Strings to a struct? https://old.reddit.com/C_Programming/comments/c636un/how_to_pass_strings_to_a_struct/ identifier Using universal character names for identifiers https://old.reddit.com/C_Programming/comments/eewy25/using_universal_character_names_for_identifiers/ Why is the struct keyword required on variable declarations in C? https://old.reddit.com/C_Programming/comments/eus7ii/why_is_the_struct_keyword_required_on_variable/ Should functions be prefixed with some sort of identifier in anything other than a small one file program? https://old.reddit.com/C_Programming/comments/9nd6t7/should_functions_be_prefixed_with_some_sort_of/ Variable declaration, definition and initialization difference? https://old.reddit.com/C_Programming/comments/eu8a7p/variable_declaration_definition_and/ Comparing to char '\0' vs implicit comparison to identify end of string? https://old.reddit.com/C_Programming/comments/dunyst/comparing_to_char_0_vs_implicit_comparison_to/ The way C Programers explain pointers https://old.reddit.com/C_Programming/comments/ek53ma/the_way_c_programers_explain_pointers/ When would you ever NOT use header guards? https://old.reddit.com/C_Programming/comments/exkhrf/when_would_you_ever_not_use_header_guards/ Is there a way of translating JavaScript code to C? 
https://old.reddit.com/C_Programming/comments/e7zq9j/is_there_a_way_of_translating_javascript_code_to_c/ Limited Length for Identifiers https://old.reddit.com/C_Programming/comments/4d1hwo/limited_length_for_identifiers/ Why do so many (modern) beginner resources recommend char* pt instead of char *pt? https://old.reddit.com/C_Programming/comments/cn6pnwhy_do_so_many_modern_beginner_resources/ Why convention is "void *identifier" instead of "void* identifier" for defining variable. https://old.reddit.com/C_Programming/comments/4ndk6a/why_convention_is_void_identifier_instead_of_void/ A question on buffer overruns https://old.reddit.com/C_Programming/comments/eq21ph/a_question_on_buffer_overruns/ Need help generating stubs for missing functions https://old.reddit.com/C_Programming/comments/ebhvln/need_help_generating_stubs_for_missing_functions/ Can someone explain this passage from Dennis Ritchie's paper about the design of C? https://old.reddit.com/C_Programming/comments/dmrti6/can_someone_explain_this_passage_from_dennis/ Rob Pike: Notes on Programming in C https://old.reddit.com/C_Programming/comments/bahs4v/rob_pike_notes_on_programming_in_c/ How should/do you place your * for pointers? https://old.reddit.com/C_Programming/comments/bjf85y/how_shoulddo_you_place_your_for_pointers/ scope I implemented 'defer' (cf go, zig, or d) https://old.reddit.com/C_Programming/comments/f4gtkt/i_implemented_defer_cf_go_zig_or_d/ Is there anything wrong with this file copy program? How should it be improved? https://old.reddit.com/C_Programming/comments/f8e62i/is_there_anything_wrong_with_this_file_copy/ Simple static variable output question https://old.reddit.com/C_Programming/comments/f8zy3e/simple_static_variable_output_question/ Confused about compile-time constants https://old.reddit.com/C_Programming/comments/fa9x9o/confused_about_compiletime_constants/ Why does the "static" keyword have 2 seemingly different meanings depending on context? https://old.reddit.com/C_Programming/comments/etc1wx/why_does_the_static_keyword_have_2_seemingly/ Possibility of creating a plugin for either gcc or clang to support anonymous function https://old.reddit.com/C_Programming/comments/eoqb28/possibility_of_creating_a_plugin_for_either_gcc/ Declaring counter variables outside a loop vs. inside a loop: which is better practice? https://old.reddit.com/C_Programming/comments/eq6it8/declaring_counter_variables_outside_a_loop_vs/ Zeroing arrays on initialization? https://old.reddit.com/C_Programming/comments/enzu80/zeroing_arrays_on_initialization/ Why for loop was added to C when there is already a while loop? https://old.reddit.com/C_Programming/comments/e0xhja/why_for_loop_was_added_to_c_when_there_is_already/ Noob question about scope and where in memory variables live https://old.reddit.com/C_Programming/comments/a77auj/noob_question_about_scope_and_where_in_memory/ Array declaration https://old.reddit.com/C_Programming/comments/e8nmhw/array_declaration/ Should a function return a pointer or write to a buffer? https://old.reddit.com/C_Programming/comments/e4o8i2/should_a_function_return_a_pointer_or_write_to_a/ n00b question about using structures to avoid passing huge numbers of parameters as function arguments https://old.reddit.com/C_Programming/comments/dn7hrn/n00b_question_about_using_structures_to_avoid/ lifetime Optional and name-based arguments in C https://old.reddit.com/C_Programming/comments/f49k9x/optional_and_namebased_arguments_in_c/ Is C++17 RVO/copy-elision a part of standard C? 
https://old.reddit.com/C_Programming/comments/f1gktm/is_c17_rvocopyelision_a_part_of_standard_c/ [Beginner ]Why does it form infinite loop when i don't declare the variable "num" as static ? https://old.reddit.com/C_Programming/comments/epxn4w/beginner_why_does_it_form_infinite_loop_when_i/ Difference between int * t and int t[] https://old.reddit.com/C_Programming/comments/e0g5fg/difference_between_int_t_and_int_t/ Addresses change after assignment, and char* doesn't print correctly? https://old.reddit.com/C_Programming/comments/cvnyzd/addresses_change_after_assignment_and_char_doesnt/ Milkstrings: Strings in C which are easy to use, but have limited lifetime and length. To preserve put them in the fridge. https://old.reddit.com/C_Programming/comments/4xdovs/milkstrings_strings_in_c_which_are_easy_to_use/ What's an "object" anyway https://old.reddit.com/C_Programming/comments/aliooy/whats_an_object_anyway/ Pointer invalidation rules for storage reuse https://old.reddit.com/C_Programming/comments/9qsd5l/pointer_invalidation_rules_for_storage_reuse/ Using Designated Initializer with Array of Values https://old.reddit.com/C_Programming/comments/7e6c37/using_designated_initializer_with_array_of_values/ Need help understanding localtime() https://old.reddit.com/C_Programming/comments/6r71o1/need_help_understanding_localtime/ Is it possible to maintain stack frames when leaving functions? https://old.reddit.com/C_Programming/comments/5va7tb/is_it_possible_to_maintain_stack_frames_when/ lookup Does C have anything like C++ constexpr? https://old.reddit.com/C_Programming/comments/ebu0sy/does_c_have_anything_like_c_constexp fast lookup hash table? https://old.reddit.com/C_Programming/comments/6yaxvs/fast_lookup_hash_table/ ELF: symbol lookup via DT_HASH https://old.reddit.com/C_Programming/comments/67c92g/elf_symbol_lookup_via_dt_hash/ When are binary trees efficient? https://old.reddit.com/C_Programming/comments/ba4om5/when_are_binary_trees_efficient/ Help with optimization https://old.reddit.com/C_Programming/comments/c283n8/help_with_optimization/ Create function pointer from string? https://old.reddit.com/C_Programming/comments/c11a5g/create_function_pointer_from_string/ How do you ensure the right constants are used in the right place? https://old.reddit.com/C_Programming/comments/agcdbj/how_do_you_ensure_the_right_constants_are_used_in/ Storing key/value pairs in C? https://old.reddit.com/C_Programming/comments/8jo5m4/storing_keyvalue_pairs_in_c/ A struct with an array of the struct in itself? https://old.reddit.com/C_Programming/comments/56lkej/a_struct_with_an_array_of_the_struct_in_itself/ Best Data Structure? https://old.reddit.com/C_Programming/comments/6d0mwz/best_data_structure/ How to deal with old references to a resized hash table? https://old.reddit.com/C_Programming/comments/73lrz2/how_to_deal_with_old_references_to_a_resized_hash/ How bad is the performance loss from 'generic' data structure lack of locality https://old.reddit.com/C_Programming/comments/5rodk8/how_bad_is_the_performance_loss_from_generic_data/ How to access array by value instead of index? https://old.reddit.com/C_Programming/comments/53lk6o/how_to_access_array_by_value_instead_of_index/ Most frequent element in integer array https://old.reddit.com/C_Programming/comments/4jlxlu/most_frequent_element_in_integer_array/ Hash Table for Embedded Systems? https://old.reddit.com/C_Programming/comments/5gtfb9/hash_table_for_embedded_systems/ name spaces In Python, we have a "Zen of Python". 
What would you say is the "Zen of C"? https://old.reddit.com/C_Programming/comments/8qn5ti/in_python_we_have_a_zen_of_python_what_would_you/ Project structuring and 'namespacing'. What are the common/best practices? https://old.reddit.com/C_Programming/comments/6r2w93/project_structuring_and_namespacing_what_are_the/ Tips for avoiding name clashing in C? https://old.reddit.com/C_Programming/comments/cc24wn/tips_for_avoiding_name_clashing_in_c/ Struct with function pointer in header file. https://old.reddit.com/C_Programming/comments/c095sm/struct_with_function_pointer_in_header_file/ Reason Quake's sources use reserved "_t" postfix? https://old.reddit.com/C_Programming/comments/9ffx60/reason_quakes_sources_use_reserved_t_postfix/ Best way to "hide" functions used by macros? https://old.reddit.com/C_Programming/comments/8ve5mk/best_way_to_hide_functions_used_by_macros/ How to prevent potential symbol conflicts with customer code? https://old.reddit.com/C_Programming/comments/2k2nq9/how_to_prevent_potential_symbol_conflicts_with/ Library (access) design tradeoffs? https://old.reddit.com/C_Programming/comments/4s6loy/library_access_design_tradeoffs/ When is it appropriate to typedef? https://old.reddit.com/C_Programming/comments/3y96s5/when_is_it_appropriate_to_typedef/ Should functions be prefixed with some sort of identifier in anything other than a small one file program? https://old.reddit.com/C_Programming/comments/9nd6t7/should_functions_be_prefixed_with_some_sort_of/ 
submitted by tinkerdaemon19 to C_Programming [link] [comments]

MAME 0.215

A wild MAME 0.215 appears! Yes, another month has gone by, and it’s time to check out what’s new. On the arcade side, Taito’s incredibly rare 4-screen top-down racer Super Dead Heat is now playable! Joining its ranks are other rarities, such as the European release of Capcom’s 19XX: The War Against Destiny, and a bootleg of Jaleco’s P-47 – The Freedom Fighter using a different sound system. We’ve got three newly supported Game & Watch titles: Lion, Manhole, and Spitball Sparky, as well as the crystal screen version of Super Mario Bros. Two new JAKKS Pacific TV games, Capcom 3-in-1 and Disney Princesses, have also been added.
Other improvements include several more protection microcontrollers dumped and emulated, the NCR Decision Mate V working (now including hard disk controllers), graphics fixes for the 68k-based SNK and Alpha Denshi games, and some graphical updates to the Super A'Can driver.
We’ve updated bgfx, adding preliminary Vulkan support. There are some known issues, so if you run into trouble, check our GitHub issues page to see if it has already been reported, and report it if it hasn’t. We’ve also improved support for building and running on Linux systems without X11.
You can get the source and Windows binary packages from the download page.

MAMETesters Bugs Fixed

New working machines

New working clones

Machines promoted to working

New machines marked as NOT_WORKING

New clones marked as NOT_WORKING

New working software list additions

Software list items promoted to working

New NOT_WORKING software list additions

Source Changes

submitted by cuavas to emulation [link] [comments]
