May 24, 2017

hackergotchi for Jonathan Dowland

Jonathan Dowland

yakking

I've written a guest post for the Yakking Blog — "A WadC successor in Haskell?". It's mainly on the topic of Haskell, with WadC as a use-case for a thought experiment.

Yakking is a collaborative blog geared towards beginner software engineers that is put together by some friends of mine. I was talking to them about contributing a blog post on a completely different topic a while ago, but that has not come to fruition (there or anywhere, yet). When I wrote up the notes that formed the basis of this blog post, I realised it might be a good fit.

Take a look at some of their other posts, and if you find it interesting, subscribe!

24 May, 2017 01:07PM

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.14.1

Weblate 2.14.1 has been released today. It is a bugfix release fixing possible migration issues, search-results navigation, and some minor security issues.

Full list of changes:

  • Fixed possible error when paginating search results.
  • Fixed migrations from older versions in some corner cases.
  • Fixed possible CSRF on project watch and unwatch.
  • The password reset no longer authenticates the user.
  • Fixed possible captcha bypass on forgotten password.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server; you can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as an official translation service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or to help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

24 May, 2017 08:00AM

May 23, 2017

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 0.12.11: Loads of goodies

The eleventh update in the 0.12.* series of Rcpp landed on CRAN yesterday following the initial upload on the weekend, and the Debian package and Windows binaries should follow as usual. The 0.12.11 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, and the 0.12.10 release in March --- making it the fifteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1026 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release follows on the heels of R's 3.4.0 release and addresses one or two issues from the transition, along with a literal boatload of other fixes and enhancements. James "coatless" Balamuta was once again restless in making the documentation better, Kirill Mueller addressed a number of more obscure compiler warnings (triggered under -Wextra and the like), Jim Hester improved exception handling, and much more was done, mostly by the Rcpp Core team. All changes are listed below in some detail.

One big change that JJ made is that Rcpp Attributes now also generate the now-almost-required package registration code. (For background, I blogged about this one, two, three times.) We tested this and do not expect it to throw curveballs: whether you have an existing src/init.c, or do not have registration set in your NAMESPACE, it should cover most cases. But one never knows, and a first post-release buglet related to how devtools tests things has already been fixed in this PR by JJ.

Changes in Rcpp version 0.12.11 (2017-05-20)

  • Changes in Rcpp API:

    • Rcpp::exceptions can now be constructed without a call stack (Jim Hester in #663 addressing #664).

    • Somewhat spurious compiler messages under very verbose settings are now suppressed (Kirill Mueller in #670, #671, #672, #687, #688, #691).

    • Refreshed the included tinyformat template library (James Balamuta in #674 addressing #673).

    • Added printf-like syntax support for exception classes and variadic templating for Rcpp::stop and Rcpp::warning (James Balamuta in #676).

    • Exception messages have been rewritten to provide additional information (James Balamuta in #676 and #677 addressing #184).

    • One more instance of Rf_mkString is protected from garbage collection (Dirk in #686 addressing #685).

    • Two exception specifications that are no longer tolerated by g++-7.1 or later were removed (Dirk in #690 addressing #689).

  • Changes in Rcpp Documentation:

  • Changes in Rcpp Sugar:

    • Added sugar function trimws (Nathan Russell in #680 addressing #679).
  • Changes in Rcpp Attributes:

    • Automatically generate native routine registrations (JJ in #694)

    • The plugins for C++11, C++14, C++17 now set the values R 3.4.0 or later expects; a plugin for C++98 was added (Dirk in #684 addressing #683).

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function now creates a package registration file provided R 3.4.0 or later is used (Dirk in #692)

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 May, 2017 07:59PM

Reproducible builds folks

Reproducible Builds: week 108 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday May 14 and Saturday May 20 2017:

News and Media coverage

  • We've reached 94.0% reproducible packages on testing/amd64! (NB. without build path variation)
  • Maria Glukhova was interviewed on It's FOSS about her involvement with Reproducible Builds with respect to Outreachy.

IRC meeting

Our next IRC meeting has been scheduled for Thursday June 1 at 16:00 UTC.

Packages reviewed and fixed, bugs filed, etc.

Bernhard M. Wiedemann:

Chris Lamb:

Reviews of unreproducible packages

35 package reviews have been added, 28 have been updated and 12 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been added:

diffoscope development

strip-nondeterminism development

tests.reproducible-builds.org

Holger wrote a new systemd-based scheduling system replacing 162 constantly running Jenkins jobs which were slowing down job execution in general:

  • Nothing fancy really, just 370 lines of shell code in two scripts; of those 370 lines, 80 are comments and 162 are node definitions for those 162 "jobs".
  • Worker logs are not yet as good as with Jenkins, but we usually don't need real-time log viewing of specific builds; or rather, it's a waste of time to do it. (Actual package build logs remain unchanged.)
  • Builds are a lot faster for the fast archs, though there is not much difference on armhf.
  • The switch was made on April 12 for i386 (and a week later for the rest); the images below are ordered with i386 on top, then amd64, armhf and arm64, and except for armhf it's pretty visible when the switch was made.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen, Bernhard M. Wiedemann, Vagrant Cascadian and Maria Glukhova & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

23 May, 2017 06:43PM

Tianon Gravi

Debuerreotype

Following in the footsteps of one of my favorite Debian Developers, Chris Lamb / lamby (who is quite prolific in the reproducible builds effort within Debian), I’ve started a new project based on snapshot.debian.org (time-based snapshots of the Debian archive) and some of lamby’s work for creating reproducible Debian (debootstrap) rootfs tarballs.

The project is named “Debuerreotype” as an homage to the photography roots of the word “snapshot” and the daguerreotype process which was an early method of taking photographs. The essential goal is to create “photographs” of a minimal Debian rootfs, so the name seemed appropriate (even if it’s a bit on the “mouthful” side).

The end-goal is to create and release Debian rootfs tarballs for a given point-in-time (especially for use in Docker) which should be fully reproducible, and thus improve confidence in the provenance of the Debian Docker base images.

For more information about reproducibility and why it matters, see reproducible-builds.org, which has more thorough explanations of the why and how and links to other important work such as the reproducible builds effort in Debian (for Debian package builds).

In order to verify that the tool actually works as intended, I ran builds against seven explicit architectures (amd64, arm64, armel, armhf, i386, ppc64el, s390x) and eight explicit suites (oldstable, stable, testing, unstable, wheezy, jessie, stretch, sid).

I used a timestamp value of 2017-05-16T00:00:00Z, and skipped combinations that don’t exist (such as wheezy on arm64) or aren’t supported anymore (such as wheezy on s390x). I ran the scripts repeatedly over several days, using diffoscope to compare the results.
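
To make the shape of that test matrix concrete, here is a minimal Python sketch of the combinations involved; the skip list is illustrative rather than exhaustive, and the build command is hypothetical (the real entry points live in the Debuerreotype repository):

import itertools
import subprocess

arches = ["amd64", "arm64", "armel", "armhf", "i386", "ppc64el", "s390x"]
suites = ["oldstable", "stable", "testing", "unstable",
          "wheezy", "jessie", "stretch", "sid"]
timestamp = "2017-05-16T00:00:00Z"

# Illustrative (not exhaustive) skip list of combinations that don't exist
# or are no longer supported.
skip = {("wheezy", "arm64"), ("wheezy", "s390x")}

for suite, arch in itertools.product(suites, arches):
    if (suite, arch) in skip:
        continue
    # Hypothetical invocation; see the project README for the real interface.
    subprocess.run(["./build.sh", arch, suite, timestamp], check=True)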

While doing said testing, I ran across #857803, and added a workaround. There's also a minor outstanding issue with wheezy's reproducibility that I haven't had a chance to dig very deeply into yet (but it's pretty benign, and wheezy's LTS support window ends 2018-05-31, so I'm not too stressed about it).

I’ve also packaged the tool for Debian, and submitted it into the NEW queue, so hopefully the FTP Masters will look favorably upon this being a tool that’s available to install from the Debian archive as well. 😇

Anyhow, please give it a try, have fun, and as always, report bugs!

23 May, 2017 06:00AM by Tianon Gravi (admwiggin@gmail.com)

May 22, 2017

hackergotchi for Gunnar Wolf

Gunnar Wolf

Open Source Symposium 2017

I travelled (for three days only!) to Argentina, to be a part of the Open Source Symposium 2017, a co-located event of the International Conference on Software Engineering.

This is, all in all, an interesting although small conference — we are around 30 people in the room. It is quite an unusual conference for me, as it is among the first "formal" academic conferences I have been part of. Sessions have so far been quite interesting.
And, of course, the proceedings! They managed to publish them via the "formal" academic channels (a nice hard-cover Springer volume) under an Open Access license (which is sadly not usual for such channels, and normally unbelievably expensive). So, you can download the full proceedings, or article by article, in EPUB or in PDF...
...Which is very very nice :)
Previous editions of this symposium also have their respective proceedings available, but AFAICT they have not been downloadable.
So, get the book; it provides very interesting and original insights into our community, seen from several quite novel angles!

22 May, 2017 05:21PM by gwolf

hackergotchi for Michal Čihař

Michal Čihař

HackerOne experience with Weblate

Weblate started using HackerOne Community Edition some time ago, and I think it's good to share my experience with it. Do you have an open source project and want to get more attention from the security community? This post describes how it looks from the perspective of a pretty small project.

I applied to HackerOne Community Edition with Weblate at the end of March, and it was approved early in April. Based on their recommendations I started in invite-only mode, but that really didn't bring much attention (exactly zero reports), so I decided to go public.

I asked for the project to be made public just after coming back from a two-week vacation, expecting the approval to take some time, during which I would settle the things that had popped up during the vacation. In the end it was approved within a single day, so I was immediately under fire from incoming reports:

Reports on HackerOne

I was surprised that they didn't lie: you really will get a huge amount of issues just after making your project public. Most of them were quite simple and repetitive (as you can see from the number of duplicates), but they provided valuable input.

Even more surprisingly, there was a second peak when I started to disclose resolved issues (once Weblate 2.14 had been released).

Overall, the issues can be divided into a few groups:

  • Server configuration, such as a lack of Content-Security-Policy headers. This is certainly good security practice and we really didn't follow it in all cases. The situation should be way better now.
  • Lack of rate limiting in Weblate. We really didn't do any of this, and many reporters (correctly) showed that it should be addressed at important entry points such as authentication. Weblate 2.14 brought a lot of features in this area.
  • Not using https where applicable. Yes, some APIs or websites did not support https in the past, but they do now and I hadn't noticed.
  • Several pages were vulnerable to CSRF, as they were using GET where POST with CSRF protection would be more appropriate.
  • Lack of password strength validation. I've incorporated Django's password validation into Weblate, hopefully keeping out the weakest passwords (a minimal settings sketch follows this list).
  • Several issues in authentication using Python Social Auth. I had never really looked at how the authentication works there, and there are some questionable decisions and bugs. Some of the bugs have already been addressed in current releases, but there are still some left to solve.
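
As an illustration of the password-validation point above, here is a minimal sketch of what enabling Django's built-in password validation looks like in a project's settings.py; the validator list and the options shown are generic Django examples, not necessarily the exact configuration Weblate ships:

# Generic Django password validation settings (settings.py); the values are
# illustrative, not necessarily what Weblate uses.
AUTH_PASSWORD_VALIDATORS = [
    # Reject passwords too similar to the username or e-mail address.
    {"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"},
    # Enforce a minimum length (8 here, as an example).
    {
        "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
        "OPTIONS": {"min_length": 8},
    },
    # Reject passwords from a list of commonly used passwords.
    {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
    # Reject entirely numeric passwords.
    {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
]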

In the end it was a really challenging week coping with the incoming reports, but I think I managed it quite well. The HackerOne metrics state an average of 2 hours to respond to incoming reports, which I think will not work in the long term :-).

Anyway, thanks to this you can now enjoy Weblate 2.14, which is more secure than any release before. If you have not yet upgraded, you might consider doing so now, or look into our support offering for self-hosted Weblate.

The downside of all this was that the initial publication on HackerOne made our website the target of a lot of automated tools, and the web server was not really ready for that. I'm really sorry to all Hosted Weblate users who were affected by this. This has also been addressed now, but the infrastructure really should have been prepared for it beforehand. To show how it looked, here is the number of requests to the nginx server:

nginx requests

I'm really glad I could make Weblate available on HackerOne, as it will clearly improve its security and the security of our hosted offering. I will certainly consider providing swag and/or bounties for further severe reports, but that won't be possible without enough funding for Weblate.

Filed under: Debian English SUSE Weblate

22 May, 2017 10:00AM

May 21, 2017

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

apt-offline 1.8.0 released

I am pleased to announce the release of apt-offline, version 1.8.0. This release is mainly a forward port of apt-offline to Python 3 and PyQt5. There are some glitches related to Python 3 and PyQt5, but overall the CLI interface works fine. Other than the porting, an important bug related to a memory leak when using the MIME library has also been fixed. And there are some updates to the documentation (user examples) based on feedback from users.

The release is available from GitHub and Alioth.

What is apt-offline ?

Description: offline APT package manager
apt-offline is an Offline APT Package Manager.
.
apt-offline can fully update and upgrade an APT based distribution without
connecting to the network, all of it transparent to APT.
.
apt-offline can be used to generate a signature on a machine (with no network).
This signature contains all download information required for the APT database
system. This signature file can be used on another machine connected to the
internet (which need not be a Debian box and can even be running windows) to
download the updates.
The downloaded data will contain all updates in a format understood by APT and
this data can be used by apt-offline to update the non-networked machine.
.
apt-offline can also fetch bug reports and make them available offline.

21 May, 2017 09:17PM by Ritesh Raj Sarraf

hackergotchi for Holger Levsen

Holger Levsen

20170521-this-time-of-the-year

It's this time of the year again…

So it seems summer has finally arrived here, and for the first time this year I've been offline for more than 24 hours, despite having wireless network coverage. The lake, the people, the bonfire, the music, the mosquitos and the fireworks at 3.30 in the morning were totally worth it! ;-)

21 May, 2017 06:26PM

Russ Allbery

Review: Sector General

Review: Sector General, by James White

Series: Sector General #5
Publisher: Orb
Copyright: 1983
Printing: 2002
ISBN: 0-312-87770-6
Format: Trade paperback
Pages: 187

Sector General is the fifth book (or, probably more accurately, collection) in the Sector General series. I blame the original publishers for the confusion. The publication information is for the Alien Emergencies omnibus, which includes the fourth through the sixth books in the series.

Looking back on my previous reviews of this series (wow, it's been eight years since I read the last one?), I see I was reviewing them as novels rather than as short story collections. In retrospect, that was a mistake, since they're composed of clearly stand-alone stories with a very loose arc. I'm not going to go back and re-read the earlier collections to give them proper per-story reviews, but may as well do this properly here.

Overall, this collection is more of the same, so if that's what you want, there won't be any negative surprises. It's another four engineer-with-a-wrench stories about biological and medical puzzles, with only a tiny bit of characterization and little hint of any personal life for any of the characters outside of the job. Some stories are forgettable, but White does create some memorable aliens. Sadly, the stories don't take us to the point of real communication, so those aliens stop at biological puzzles and guesswork. "Combined Operation" is probably the best, although "Accident" is the most philosophical and an interesting look at the founding principle of Sector General.

"Accident": MacEwan and Grawlya-Ki are human and alien brought together by a tragic war, and forever linked by a rather bizarre war monument. (It's a very neat SF concept, although the implications and undiscussed consequences don't bear thinking about too deeply.) The result of that war was a general recognition that such things should not be allowed to happen again, and it brought about a new, deep commitment to inter-species tolerance and politeness. Which is, in a rather fascinating philosophical twist, exactly what MacEwan and Grawlya-Ki are fighting against: not the lack of aggression, which they completely agree with, but with the layers of politeness that result in every species treating all others as if they were eggshells. Their conviction is that this cannot create a lasting peace.

This insight is one of the most profound bits I've read in the Sector General novels and supports quite a lot of philosophical debate. (Sadly, there isn't a lot of that in the story itself.) The backdrop against which it plays out is an accidental crash in a spaceport facility, creating a dangerous and potentially deadly environment for a variety of aliens. Given the collection in which this is included and the philosophical bent described above, you can probably guess where this goes, although I'll leave it unspoiled if you can't. It's an idea that could have been presented with more subtlety, but it's a really great piece of setting background that makes the whole series snap into focus. A much better story in context than its surface plot. (7)

"Survivor": The hospital ship Rhabwar rescues a sole survivor from the wreck of an alien ship caused by incomplete safeguards on hyperdrive generators. The alien is very badly injured and unconscious and needs the full attention of Sector General, but on the way back, the empath Prilicla also begins suffering from empathic hypersensitivity. Conway, the protagonist of most of this series, devotes most of his attention to that problem, having delivered the rescued alien to competent surgical hands. But it will surprise no regular reader that the problems turn out to be linked (making it a bit improbable that it takes the doctors so long to figure that out). A very typical entry in the series. (6)

"Investigation": Another very typical entry, although this time the crashed spaceship is on a planet. The scattered, unconscious bodies of the survivors, plus signs of starvation and recent amputation on all of them, convinces the military (well, police is probably more accurate) escort that this is may be a crime scene. The doctors are unconvinced, but cautious, and local sand storms and mobile vegetation add to the threat. I thought this alien design was a bit less interesting (and a lot creepier). (6)

"Combined Operation": The best (and longest) story of this collection. Another crashed alien spacecraft, but this time it's huge, large enough (and, as they quickly realize, of a design) to indicate a space station rather than a ship, except that it's in the middle of nowhere and each segment contains a giant alien worm creature. Here, piecing together the biology and the nature of the vehicle is only the beginning; the conclusion points to an even larger problem, one that requires drawing on rather significant resources to solve. (On a deadline, of course, to add some drama.) This story requires the doctors to go unusually deep into the biology and extrapolated culture of the alien they're attempting to rescue, which made it more intellectually satisfying for me. (7)

Followed by Star Healer.

Rating: 6 out of 10

21 May, 2017 05:21PM

hackergotchi for Adnan Hodzic

Adnan Hodzic

Automagically deploy & run containerized WordPress (PHP7 FPM, Nginx, MariaDB) using Ansible + Docker on AWS

In this blog post, I’ve described what started as simple migration of WordPress blog to AWS, ended up as automation project consisting of publishing multiple Ansible roles deploying and running multiple Docker images.

If you’re not interested in reading about my entire journey, cognition gains and how this process came to be, please skim down to “Birth of: containerized-wordpress-project (TL;DR)” section.

Migrating WordPress blog to AWS (EC2, Lightsail?)

Since I’ve been sold on Amazon’s AWS idea of cloud computing “services” for couple of years now. I’ve wanted, and been trying to migrate this (WordPress) blog to AWS, but somehow it never worked out.

Moving it to an EC2 instance, with its own EBS volumes, AMI, EIP, Security Group … it just seemed like overkill.

When AWS Lightsail was first released, it seemed that was an answer to all my problems.

But it wasn’t, disregarding its bit restrictive/dumbed down versions of original features. Living in Amsterdam, my main problem with it was that it was only available in a single US region.

Regardless, I thought it had everything I needed for a WordPress site, and as a new service it had great potential.

Its regional limitations were also good in a sense, as they made me realize one important thing: once I migrated my blog to AWS, I wanted to be able to seamlessly move it across different EC2 instances and different regions as they became available.

If done properly, it meant I could even have it moved across different clouds (I'm talking to you, Google Cloud).

P.S.: AWS Lightsail is now available in a couple of different regions across Europe, in a rollout that was almost seamless.

Fundamental problem of every migration … is migration

Phase 1: Don’t reinvent the wheel?

When you have a WordPress site that's not self-hosted, you want everything to work, yet you really don't want to spend any time managing the infrastructure it's on.

And as soon as I started looking at what could fit these criteria, I found that there were pre-configured, out-of-the-box WordPress EC2 images available on AWS Marketplace. Great!

But when I took a look, although everything ran out of the box, I wasn't happy with the software stack it was all built on: namely Ubuntu 14.04 and Apache, with all of the services started using custom scripts. Yuck.

With this setup, when it was time to upgrade (and it's already that time) you wouldn't be thinking about an upgrade; you'd only be thinking about another migration.

Phase 2: What if I built everything myself?

Installing and configuring everything manually, and then writing a huge HowTo to follow whenever I needed to re-create the whole stack, was not an option. The same goes for scripting the whole process, as the overhead of changes that had to be tracked was way too big.

Being a huge Ansible fan, automating this was the natural next step.

I even found an awesome Ansible role which seemed like it was going to do everything I needed. Except, I realized I needed to update all the software it deployed, and customize it, since the configuration it deployed wasn't generic enough.

So I forked it and got to work. But soon enough I was knee-deep in making and fiddling with various system changes; something I was trying to get away from in this case, and most importantly something I was trying to avoid when it was time for the next update.

Phase 3: Marriage made in heaven: Ansible + Docker + AWS

The idea to have everything Dockerized was around from the very start. However, it never made a lot of sense until I put Ansible into the same picture. It was at this point that my final idea and requirements became crystal clear.

Use Ansible to configure and set up a host ready for the Docker ecosystem: an ecosystem consisting of separate containers for each required service (WordPress + Nginx + MariaDB), all linked together as a single service using Docker Compose.

The idea was backed by the goal of spending minimal to no time (and effort) on manual configuration of anything on the server. My level of attachment to this server was so low that I didn't even want to SSH into it.

If there was something wrong, I could just nuke the whole thing and deploy code on a new healthy rolled out server with everything working out of box.

After it was clear what needed to be done, I got to work.

Birth of: containerized-wordpress-project (TL;DR)

After a lot of work, end result is project which allows you to automagically deploy & run containerized WordPress instance which consists of 3 separate containers running:

  • WordPress (PHP7 FPM)
  • Nginx
  • MariaDB

Once run, the containerized-wordpress playbook will guide you through an interactive setup of all 3 containers, after which it will run all the Ansible roles created for this project. The end result is that a host you have never even SSH-ed into will be fully configured and running a containerized WordPress instance out of the box.

Most importantly, this whole process will be completed in <= 5 minutes and doesn’t require any Docker or Ansible knowledge!

containerized-wordpress demo

Console output of running “containerized-wordpress” Ansible Playbook:

Accessing WordPress instance created from “containerized-wordpress” Ansible Playbook:

Did I end up migrating to AWS in the end?

You bet. Thanks to the efforts made in containerized-wordpress-project, I'm happy to report my whole WordPress migration to AWS was completed in a matter of minutes, and this blog is now running on Docker and on AWS!

I hope this same project will help you take a leap in your migration.

Happy hacking!

21 May, 2017 04:28PM by Adnan Hodzic

Elena 'valhalla' Grandi

Modern XMPP Server

I've published a new HOWTO on my website (www.trueelena.org/computers/ho):

Enrico already wrote about the Why (and the What, Who and When) on his blog (www.enricozini.org/blog/2017/d), so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.


How

I've decided to install Prosody (prosody.im), mostly because it was recommended by the RTC QuickStart Guide (rtcquickstart.org); I've heard that similar results can be reached with ejabberd (www.ejabberd.im) and other servers.

I'm also targeting Debian (www.debian.org) stable (+ backports); as I write, this is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites

You will need to enable the backports repository (backports.debian.org) and then install the packages prosody and prosody-modules.

You also need to set up some TLS certificates (I used Let's Encrypt, letsencrypt.org) and make them readable by the prosody user; see Chapter 12 of the RTC QuickStart Guide (rtcquickstart.org/guide/multi/) for more details.

On your firewall, you'll need to open the following TCP ports:


  • 5222 (client2server)

  • 5269 (server2server)

  • 5280 (default http port for prosody)

  • 5281 (default https port for prosody)

The latter two are needed to enable some services provided via http(s), including rich media transfers.

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim, en.wikipedia.org/wiki/Messagin).
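
For reference, this corresponds to the following line in /etc/prosody/prosody.cfg.lua (false is the packaged default, so the line simply makes the choice explicit):

allow_registration = false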

prosody configuration

You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:


c2s_require_encryption = true
s2s_secure_auth = true

and then, sadly, add to the whitelist any server that you want to talk to but that doesn't support the above:


s2s_insecure_domains = { "gmail.com" }


virtualhosts

For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:


VirtualHost "chat.example.org"
enabled = true
ssl = {
key = "/etc/ssl/private/example.org-key.pem";
certificate = "/etc/ssl/public/example.org.pem";
}


For the domains where you also want to enable MUCs, add the following lines:


Component "conference.chat.example.org" "muc"
restrict_room_creation = "local"


the "local" configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usages of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):


Component "upload.chat.example.org" "http_upload"

The defaults are pretty sane, but see modules.prosody.im/mod_http_up for details on the knobs you can configure for this module.

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.

additional modules

Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:


"something";

Most of these come from the prosody-modules package (and thus from modules.prosody.im) and some may require changes when prosody 0.10 becomes available; where this is the case, it is mentioned below.

  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.

  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_blocklist.

  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.

  • mod_mam (XEP-0313)
    Archive messages on the server for a limited period of time (default 1 week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for SQL-backed storage with archiving capabilities.

  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.

@Gruppo Linux Como @LIFO

21 May, 2017 11:30AM by Elena ``of Valhalla''

May 20, 2017

hackergotchi for Neil Williams

Neil Williams

Software, service, data and freedom

Free software, free services but what about your data?

I care a lot about free software, not only as a Debian Developer. The use of software as a service matters as well, because my principal free software development is on just such a project, licensed under the GNU Affero General Public License version 3. The AGPL helps by allowing anyone who is suitably skilled to install their own copy of the software and run their own service on their own hardware. As a project, we are seeing increasing numbers of groups doing exactly this, and these groups are actively contributing back to the project.

So what is the problem? We've got an active project, an active community and everything is under a free software licence and regularly uploaded to Debian main. We have open code review with anonymous access to our own source code CI and anonymous access to project planning, open mailing list archives as well as an open bug tracker and a very active IRC channel (#linaro-lava on OFTC). We develop in the open, we respond in the open and we publish frequently (monthly, approximately). The code we write defaults to public visibility at runtime, with restrictions available for certain use cases.

What else can we be doing? Well, it was a simple question that started me thinking.

The lava documentation has various example test scripts e.g. https://validation.linaro.org/static/docs/v2/examples/test-jobs/qemu-kernel-standard-sid.yaml

these have no licence information, we've adapted them for a Linux Foundation project, what licence should apply to these files?

Robert Marshall

Those are our own examples, contributed as part of the documentation and covered by the AGPL like the rest of the documentation and the software which it documents, so I replied with the same. However, what about all the other submissions received by the service?

Data Freedom

LAVA acts by providing a service to authenticated users. The software runs your test code on hardware which might not be available to the user or which is simply inconvenient for the test writer to setup themselves. The AGPL covers this nicely.

What about the data contributed by the users? We make this available to other users who will, naturally, copy and paste for their own tests. In most cases, because the software defaults to public access, anonymous users also get to learn from the contributions of other test writers. This is a good thing and to be encouraged. (One reason why we moved to YAML for all submissions was to allow comments to help other users understand why the submission does specific things.)

Writing a test job submission or a test shell definition from scratch is a non-trivial amount of work. We've written dozens of pages of documentation covering how and how not to do it, but the detail of making a test job run exactly what the test writer requires can involve substantial effort. (Our documentation recommends using version control for each of these works for exactly these reasons.)

At what point do these works become software? At what point do these need licensing? How could that be declared?

Perils of the Javascript Trap approach

When reading up on the AGPL, I also read about Service as a Software Substitute (SaaSS) and this led to The Javascript Trap.

I don't consider LAVA to be SaaSS although it is Software as a Service (SaaS). (Distinguishing between those is best left to the GNU document as it is an almighty tangle at times.)

I did look at the GNU ideas for licensing Javascript but it seems cumbersome and unnecessary - a protocol designed for the specific purposes of their own service rather than as a solution which could be readily adopted by all such services.

The same problems affect trying to untangle sharing the test job data within LAVA.

Adding Licence text

The traditional way, of course, is simply to add twenty lines or so of comments at the top of every file. This works nicely for source code because the comments are hidden from the final UI (unless an explicit reference is made in the --help output or similar). It is less nice for human-readable submissions where the first thing someone has to do is scroll past the comments to get to what they want to see. At that point, it starts to look like a popup or a nagging banner - blocking the requested content on a website to try and get the viewer to subscribe to a newsletter or pay for the rest of the content. Let's not actively annoy visitors who are trying to get things done.

Adding Licence files

This can be done in the remote version control repository - then a single line in the submitted file can point at the licence. This is how I'm seeking to solve the problem of our own repositories. If the reference URL is included in the metadata of the test job submission, it can even be linked into the test job metadata and made available to everyone through the results API.

metadata:
  licence.text: http://mysite/lava/git/COPYING
  licence.name: BSD 3 clause

Metadata in LAVA test job submissions is free-form, but if the example above were adopted as a convention for LAVA submissions, it would make it easy for someone to query LAVA for the licences of a range of test submissions (see the sketch below).
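
As a sketch of what such a query could look like, the following walks a range of job IDs and prints any licence metadata found; the REST endpoint, the response format and the job-id range are all assumptions for illustration, not the documented LAVA results API:

import requests
import yaml

SERVER = "https://validation.linaro.org"  # instance mentioned earlier in this post

for job_id in range(100000, 100100):  # illustrative range of job IDs
    # Assumed endpoint returning a job's metadata as YAML; consult the LAVA
    # results API documentation for the real interface.
    resp = requests.get("{0}/results/{1}/metadata".format(SERVER, job_id))
    if resp.status_code != 200:
        continue
    metadata = yaml.safe_load(resp.text)
    if metadata and "licence.name" in metadata:
        print(job_id, metadata["licence.name"], metadata["licence.text"])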

Currently, LAVA does not store metadata from the test shell definitions except the URL of the git repo for the test shell definition, but that may be enough in most cases for someone to find the relevant COPYING or LICENCE file.

Which licence?

This could be a problem too. If users contribute data under unfriendly licences, what is LAVA to do? I've used the BSD 3 clause in the above example as I expect it to be the most commonly used licence for these contributions. A copyleft licence could be used, although doing so would require additional metadata in the submission to declare how to contribute back to the original author (because that is usually not a member of the LAVA project).

Why not Creative Commons?

Although I'm referring to these contributions as data, these are not pieces of prose or images or audio. These are instructions (with comments) for a specific piece of software to execute on behalf of the user. As such, these objects must comply with the schema and syntax of the receiving service, so a code-based licence would seem correct.

Results

Finally, a word about what comes back from your data submission: the results. This data cannot be restricted by any licence affecting either the submission or the software; it can be restricted using the API or left at the default of public access.

If the results and the submission data really are private, then the solution is to take advantage of the AGPL, take the source code of LAVA and run it internally where the entire service can be placed within a firewall.

What happens next?

  1. Please consider editing your own LAVA test job submissions to add licence metadata.
  2. Please use comments in your own LAVA test job submissions, especially if you are using some form of template engine to generate the submission. This data will be used by others; it is easier for everyone if those users do not have to ask us or you about why your test job does what it does.
  3. Add a file to your own repositories containing LAVA test shell definitions to declare how these files can be shared freely.
  4. Think about other services to which you submit data which is either only partially machine generated or which is entirely human created. Is that data free-form or are you essentially asking the service to do a precise task on your behalf as if you were programming that server directly? (Jenkins is a classic example, closely related to LAVA.)
    • Think about how much developer time was required to create that submission and how the service publishes that submission in ways that allow others to copy and paste it into their own submissions.
    • Some of those submissions can easily end up in documentation or other published sources which will need to know about how to licence and distribute that data in a new format (i.e. modification.) Do you intend for that useful purpose to be defeated by releasing your data under All Rights Reserved?

Contact

I don't enable comments on this blog, but there are enough ways to contact me and the LAVA project in the body of this post; it really shouldn't be a problem for anyone to comment.

20 May, 2017 07:24AM by Neil Williams

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Patanjali Research Foundation

PSA: Research in the domain of Ayurveda

http://www.patanjaliresearchfoundation.com/patanjali/

I am so glad to see this initiative taken by the Patanjali group. This is a great stepping stone in the health and wellness domain.

So far, Allopathy has been blunt in discarding alternative medicine practices, without much solid justification. The only, repetitive, response I've heard is "lack of research". This initiative definitely is a great step in that regard.

Ayurveda (Ancient Hindu art of healing) has a huge potential to touch lives. For the Indian sub-continent, this has the potential of a blessing.

The Prime Minister of India himself inaugurated the research centre.

20 May, 2017 04:46AM by Ritesh Raj Sarraf

May 19, 2017

hackergotchi for Clint Adams

Clint Adams

Help the Aged

I keep meeting girls from Walnut Creek who don’t know about the CDROM.

Posted on 2017-05-19
Tags: ranticore

19 May, 2017 04:10PM

hackergotchi for Michael Prokop

Michael Prokop

Debian stretch: changes in util-linux #newinstretch

We’re coming closer to the Debian/stretch stable release and similar to what we had with #newinwheezy and #newinjessie it’s time for #newinstretch!

Hideki Yamane already started the game by blogging about GitHub's icon font, fonts-octicons, and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch.

One package that isn’t new but its tools are used by many of us is util-linux, providing many essential system utilities. We have util-linux v2.25.2 in Debian/jessie and in Debian/stretch there will be util-linux >=v2.29.2. There are many new options available and we also have a few new tools available.

Tools that have been taken over from other packages

  • last: used to be shipped via sysvinit-utils in Debian/jessie
  • lastb: used to be shipped via sysvinit-utils in Debian/jessie
  • mesg: used to be shipped via sysvinit-utils in Debian/jessie
  • mountpoint: used to be shipped via initscripts in Debian/jessie
  • sulogin: used to be shipped via sysvinit-utils in Debian/jessie

New tools

  • lsipc: show information on IPC facilities, e.g.:
  • root@ff2713f55b36:/# lsipc
    RESOURCE DESCRIPTION                                              LIMIT USED  USE%
    MSGMNI   Number of message queues                                 32000    0 0.00%
    MSGMAX   Max size of message (bytes)                               8192    -     -
    MSGMNB   Default max size of queue (bytes)                        16384    -     -
    SHMMNI   Shared memory segments                                    4096    0 0.00%
    SHMALL   Shared memory pages                       18446744073692774399    0 0.00%
    SHMMAX   Max size of shared memory segment (bytes) 18446744073692774399    -     -
    SHMMIN   Min size of shared memory segment (bytes)                    1    -     -
    SEMMNI   Number of semaphore identifiers                          32000    0 0.00%
    SEMMNS   Total number of semaphores                          1024000000    0 0.00%
    SEMMSL   Max semaphores per semaphore set.                        32000    -     -
    SEMOPM   Max number of operations per semop(2)                      500    -     -
    SEMVMX   Semaphore max value                                      32767    -     -
    
  • lslogins: display information about known users in the system, e.g.:
  • root@ff2713f55b36:/# lslogins 
      UID USER     PROC PWD-LOCK PWD-DENY LAST-LOGIN GECOS
        0 root        2        0        1            root
        1 daemon      0        0        1            daemon
        2 bin         0        0        1            bin
        3 sys         0        0        1            sys
        4 sync        0        0        1            sync
        5 games       0        0        1            games
        6 man         0        0        1            man
        7 lp          0        0        1            lp
        8 mail        0        0        1            mail
        9 news        0        0        1            news
       10 uucp        0        0        1            uucp
       13 proxy       0        0        1            proxy
       33 www-data    0        0        1            www-data
       34 backup      0        0        1            backup
       38 list        0        0        1            Mailing List Manager
       39 irc         0        0        1            ircd
       41 gnats       0        0        1            Gnats Bug-Reporting System (admin)
      100 _apt        0        0        1            
    65534 nobody      0        0        1            nobody
    
  • lsns: list system namespaces, e.g.:
  • root@ff2713f55b36:/# lsns
            NS TYPE   NPROCS PID USER COMMAND
    4026531835 cgroup      2   1 root bash
    4026531837 user        2   1 root bash
    4026532473 mnt         2   1 root bash
    4026532474 uts         2   1 root bash
    4026532475 ipc         2   1 root bash
    4026532476 pid         2   1 root bash
    4026532478 net         2   1 root bash
    
  • setpriv: run a program with different privilege settings
  • zramctl: tool to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices
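
For instance, a quick zram-backed swap device could be set up with the new zramctl like this (a hypothetical session; the device name printed by --find will vary):

root@ff2713f55b36:/# zramctl --find --size 512M
/dev/zram0
root@ff2713f55b36:/# mkswap /dev/zram0
root@ff2713f55b36:/# swapon /dev/zram0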

New features/options

blkdiscard (discard the content of sectors on a device):

-p, --step <num>    size of the discard iterations within the offset
-z, --zeroout       zero-fill rather than discard

chrt (show or change the real-time scheduling attributes of a process):

-d, --deadline            set policy to SCHED_DEADLINE
-T, --sched-runtime <ns>  runtime parameter for DEADLINE
-P, --sched-period <ns>   period parameter for DEADLINE
-D, --sched-deadline <ns> deadline parameter for DEADLINE

fdformat (do a low-level formatting of a floppy disk):

-f, --from <N>    start at the track N (default 0)
-t, --to <N>      stop at the track N
-r, --repair <N>  try to repair tracks failed during the verification (max N retries)

fdisk (display or manipulate a disk partition table):

-B, --protect-boot            don't erase bootbits when creating a new label
-o, --output <list>           output columns
    --bytes                   print SIZE in bytes rather than in human readable format
-w, --wipe <mode>             wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)

New available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

findmnt (find a (mounted) filesystem):

-J, --json             use JSON output format
-M, --mountpoint <dir> the mountpoint directory
-x, --verify           verify mount table content (default is fstab)
    --verbose          print more details

flock (manage file locks from shell scripts):

-F, --no-fork            execute command without forking
    --verbose            increase verbosity

getty (open a terminal and set its mode):

--reload               reload prompts on running agetty instances

hwclock (query or set the hardware clock):

--get            read hardware clock and print drift corrected result
--update-drift   update drift factor in /etc/adjtime (requires --set or --systohc)

ldattach (attach a line discipline to a serial line):

-c, --intro-command <string>  intro sent before ldattach
-p, --pause <seconds>         pause between intro and ldattach

logger (enter messages into the system log):

-e, --skip-empty         do not log empty lines when processing files
    --no-act             do everything except write the log
    --octet-count        use rfc6587 octet counting
-S, --size <size>        maximum size for a single message
    --rfc3164            use the obsolete BSD syslog protocol
    --rfc5424[=<snip>]   use the syslog protocol (the default for remote);
                           <snip> can be notime, or notq, and/or nohost
    --sd-id <id>         rfc5424 structured data ID
    --sd-param <data>    rfc5424 structured data name=value
    --msgid <msgid>      set rfc5424 message id field
    --socket-errors[=<on|off|auto>] print connection errors when using Unix sockets

losetup (set up and control loop devices):

-L, --nooverlap               avoid possible conflict between devices
    --direct-io[=<on|off>]    open backing file with O_DIRECT 
-J, --json                    use JSON --list output format

New available --list column:

DIO  access backing file with direct-io

lsblk (list information about block devices):

-J, --json           use JSON output format

New available columns (for --output):

HOTPLUG  removable or hotplug device (usb, pcmcia, ...)
SUBSYSTEMS  de-duplicated chain of subsystems

lscpu (display information about the CPU architecture):

-y, --physical          print physical instead of logical IDs

New available column:

DRAWER  logical drawer number

lslocks (list local system locks):

-J, --json             use JSON output format
-i, --noinaccessible   ignore locks without read permissions

nsenter (run a program with namespaces of other processes):

-C, --cgroup[=<file>]      enter cgroup namespace
    --preserve-credentials do not touch uids or gids
-Z, --follow-context       set SELinux context according to --target PID

rtcwake (enter a system sleep state until a specified wakeup time):

--date <timestamp>   date time of timestamp to wake
--list-modes         list available modes

sfdisk (display or manipulate a disk partition table):

New Commands:

-J, --json <dev>                  dump partition table in JSON format
-F, --list-free [<dev> ...]       list unpartitioned free areas of each device
-r, --reorder <dev>               fix partitions order (by start offset)
    --delete <dev> [<part> ...]   delete all or specified partitions
--part-label <dev> <part> [<str>] print or change partition label
--part-type <dev> <part> [<type>] print or change partition type
--part-uuid <dev> <part> [<uuid>] print or change partition uuid
--part-attrs <dev> <part> [<str>] print or change partition attributes

New Options:

-a, --append                   append partitions to existing partition table
-b, --backup                   backup partition table sectors (see -O)
    --bytes                    print SIZE in bytes rather than in human readable format
    --move-data[=<typescript>] move partition data after relocation (requires -N)
    --color[=<when>]           colorize output (auto, always or never)
                               colors are enabled by default
-N, --partno <num>             specify partition number
-n, --no-act                   do everything except write to device
    --no-tell-kernel           do not tell kernel about changes
-O, --backup-file <path>       override default backup file name
-o, --output <list>            output columns
-w, --wipe <mode>              wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>   wipe signatures from new partitions (auto, always or never)
-X, --label <name>             specify label type (dos, gpt, ...)
-Y, --label-nested <name>      specify nested label type (dos, bsd)

Available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start  End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

swapon (enable devices and files for paging and swapping):

-o, --options <list>     comma-separated list of swap options

New available columns (for --show):

UUID   swap uuid
LABEL  swap label

unshare (run a program with some namespaces unshared from the parent):

-C, --cgroup[=<file>]                              unshare cgroup namespace
    --propagation slave|shared|private|unchanged   modify mount propagation in mount namespace
-s, --setgroups allow|deny                         control the setgroups syscall in user namespaces

Deprecated / removed options

sfdisk (display or manipulate a disk partition table):

-c, --id                  change or print partition Id
    --change-id           change Id
    --print-id            print Id
-C, --cylinders <number>  set the number of cylinders to use
-H, --heads <number>      set the number of heads to use
-S, --sectors <number>    set the number of sectors to use
-G, --show-pt-geometry    deprecated, alias to --show-geometry
-L, --Linux               deprecated, only for backward compatibility
-u, --unit S              deprecated, only sector unit is supported

19 May, 2017 08:42AM by mika

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Children’s Perspectives on Critical Data Literacies

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver, and the paper is open access and online.

Over the last couple years, we’ve worked on a large project to support children in doing — and not just learning about — data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs — in Scratch of course — to analyze data about their own learning and social interactions. An example of one of those programs, which finds how many of one’s followers in Scratch are not from the United States, is described below.
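
(The example itself is a program built from Scratch blocks; as a rough illustration of the logic such a program computes, here is an approximate Python rendering, with an assumed data shape standing in for the Scratch community data the blocks actually query.)

# Rough, illustrative Python rendering of the Scratch program described above:
# count how many of a user's followers are not from the United States.
followers = [  # assumed data shape; the real blocks query Scratch itself
    {"username": "alice", "country": "United States"},
    {"username": "bob", "country": "Germany"},
    {"username": "carol", "country": "Japan"},
]

non_us = sum(1 for f in followers if f["country"] != "United States")
print(non_us, "of my followers are not from the United States")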

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects.

As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by — and hopefully overcoming — the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.

In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data — even data that was technically public — introduced privacy concerns. One user described the ability to analyze data as, “creepy”, but at the same time, “very cool.” Children expressed concern that programmatic access to data could lead to “stalking” and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview described to us how he managed to figure out the inconsistency:

At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, Youtube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count might only include “unique” views and that view counts may include users who do not have accounts on the website.

Fourth, children building projects with Scratch Community Blocks realized that an algorithm driven by social data may cause certain users to be excluded. For example, a 13-year-old expressed concern that the system could be used to exclude users with few social connections saying:

I love these new Scratch Blocks! However I did notice that they could be used to exclude new Scratchers or Scratchers with not a lot of followers by using a code: like this:
when flag clicked
if then user’s followers < 300
stop all.
I do not think this a big problem as it would be easy to remove this code but I did just want to bring this to your attention in case this not what you would want the blocks to be used for.

Fifth, children were concerned about the possibility that measurement might distort the Scratch community’s values. While giving feedback on the new system, a user expressed concern that by making it easier to measure and compare followers, the system could elevate popularity over creativity, collaboration, and respect as a marker of success in Scratch.

I think this was a great idea! I am just a bit worried that people will make these projects and take it the wrong way, saying that followers are the most important thing in on Scratch.

Kids’ conversations around Scratch Community Blocks are good news for educators who are starting to think about how to engage young learners in thinking critically about the implications of data. Although no kid using Scratch Community Blocks discussed each of the five literacies described above, the themes reflect starting points for educators designing ways to engage kids in thinking critically about data.

Our work shows that if children are given opportunities to actively engage and build with social and behavioral data, they might not only learn how to do data analysis, but also reflect on its implications.

This blog-post and the work that it describes is a collaborative project by Samantha Hautea, Sayamindu Dasgupta, and Benjamin Mako Hill. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson from MIT CSAIL. Financial support came from the US National Science Foundation.

19 May, 2017 12:51AM by Benjamin Mako Hill

May 18, 2017

hackergotchi for Alessio Treglia

Alessio Treglia

Digital Ipseity: Which Identity?

 

Within the next three years, more than seven billion people and businesses will be connected to the Internet. During this time of dramatic increases in access to the Internet, networks have seen an interesting proliferation of systems for digital identity management (i.e. our SPID in Italy). But what is really meant by “digital identity“? All these systems are implemented in order to have the utmost certainty that the data entered by the subscriber (address, name, birth, telephone, email, etc.) is directly coincident with that of the physical person. In other words, data are certified to be “identical” to those of the user; there is a perfect overlap between the digital page and the authentic user certificate: an “idem“, that is, an identity.

This identity is our personal records reflected on the net, nothing more than that. Obviously, this data needs to be appropriately protected from malicious attacks by means of strict privacy rules, as it contains so-called “sensitive” information, but this data itself is not sufficiently interesting for the commercial market, except for statistical purposes on homogeneous population groups. What may be a real goldmine for the “web company” is another type of information: user’s ipseity. It is important to immediately remove the strong semantic ambiguity that weighs on the notion of identity. There are two distinct meanings…


18 May, 2017 08:50AM by Fabio Marzocca

hackergotchi for Michael Prokop

Michael Prokop

Debugging a mystery: ssh causing strange exit codes?

XKCD comic 1722

Recently we had a WTF moment at a customer of mine which is worth sharing.

In an automated deployment procedure we’re installing Debian systems and setting up MySQL HA/Scalability. Installation of the first node works fine, but during installation of the second node something weird is going on. Even though the deployment procedure reported that everything went fine, it wasn’t fine at all. After bisecting to the relevant command lines where it’s going wrong we identified that the failure is happening between two ssh/scp commands, which are invoked inside a chroot through a shell wrapper. The ssh command caused a wrong exit code to show up: instead of bailing out with an error (we’re running under ‘set -e’) it returned with exit code 0 and the deployment procedure continued, even though there was a fatal error. Initially we triggered the bug when two ssh/scp command lines close to each other were executed, but I managed to find a minimal example for demonstration purposes:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

What we’d expect is the following behavior, receive exit code 1 from the last command line in the chroot wrapper:

# ./ssh_wrapper 
return code = 1

But what we actually get is exit code 0:

# ./ssh_wrapper 
return code = 0

Uhm?! So what’s going wrong and what’s the fix? Let’s find out what’s causing the problem:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost command_does_not_exist >/dev/null 2>&1
exit "$?"
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 127

Ok, so if we invoke it with a binary that does not exist we properly get exit code 127, as expected.
What about switching /bin/bash to /bin/sh (which corresponds to dash here) to make sure it’s not a bash bug:

# cat ssh_wrapper 
chroot << "EOF" / /bin/sh
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Oh, but that works as expected!?

When looking at this behavior I had the feeling that something is going wrong with file descriptors. So what about wrapping the ssh command line within different tools? No luck with `stdbuf -i0 -o0 -e0 ssh root@localhost hostname`, nor with `script -c "ssh root@localhost hostname" /dev/null` and also not with `socat EXEC:"ssh root@localhost hostname" STDIO`. But it works under unbuffer(1) from the expect package:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
unbuffer ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

So my bet on something with the file descriptor handling was right. Going through the ssh manpage, what about using ssh’s `-n` option to prevent reading from standard input (stdin)?

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh -n root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Bingo! Quoting ssh(1):

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).
             This must be used when ssh is run in the background.  A common trick is
             to use this to run X11 programs on a remote machine.  For example,
             ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi,
             and the X11 connection will be automatically forwarded over an encrypted
             channel.  The ssh program will be put in the background.  (This does not work
             if ssh needs to ask for a password or passphrase; see also the -f option.)

Let’s execute the scripts through `strace -ff -s500 ./ssh_wrapper` to see what’s going in more detail.
In the strace run without ssh’s `-n` option we see that it’s cloning stdin (file descriptor 0), getting assigned to file descriptor 4:

dup(0)            = 4
[...]
read(4, "exit 1\n", 16384) = 7

while in the strace run with ssh’s `-n` option being present there’s no file descriptor duplication but only:

open("/dev/null", O_RDONLY) = 4

This matches ssh.c’s ssh_session2_open function (where stdin_null_flag corresponds to ssh’s `-n` option):

        if (stdin_null_flag) {                                            
                in = open(_PATH_DEVNULL, O_RDONLY);
        } else {
                in = dup(STDIN_FILENO);
        }

This behavior can also be simulated if we explicitly read from /dev/null, and this indeed works as well:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null </dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

The underlying problem is that both bash and ssh are consuming from stdin. This can be verified via:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
echo "Inner: pre"
while read line; do echo "Eat: $line" ; done
echo "Inner: post"
exit 3
EOF
echo "Outer: exit code = $?"

# ./ssh_wrapper
Inner: pre
Eat: echo "Inner: post"
Eat: exit 3
Outer: exit code = 0

This behavior applies to bash, ksh, mksh, posh and zsh. Only dash doesn’t show this behavior.
To understand the difference between bash and dash executions we can use the following test scripts:

# cat stdin-test-cmp
#!/bin/sh

TEST_SH=bash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-bash.out
TEST_SH=dash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-dash.out

# cat stdin-test
#!/bin/sh

: ${TEST_SH:=dash}

$TEST_SH <<"EOF"
echo "Inner: pre"
while read line; do echo "Eat: $line"; done
echo "Inner: post"
exit 3
EOF

echo "Outer: exit code = $?"

When executing `./stdin-test-cmp` and comparing the generated files stdin-test-bash.out and stdin-test-dash.out you’ll notice that dash consumes all stdin in one single go (a single `read(0, …)`), instead of character-by-character as specified by POSIX and implemented by bash, ksh, mksh, posh and zsh. See stdin-test-bash.out on the left side and stdin-test-dash.out on the right side in this screenshot:

screenshot of vimdiff on *.out files

So when ssh tries to read from stdin there’s nothing there anymore.

Quoting POSIX’s sh section:

When the shell is using standard input and it invokes a command that also uses standard input, the shell shall ensure that the standard input file pointer points directly after the command it has read when the command begins execution. It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell (whether interpreted by the shell or not) or that characters that are not read by the invoked command are not seen by the shell. When the command expecting to read standard input is started asynchronously by an interactive shell, it is unspecified whether characters are read by the command or interpreted by the shell.

If the standard input to sh is a FIFO or terminal device and is set to non-blocking reads, then sh shall enable blocking reads on standard input. This shall remain in effect when the command completes.

So while we learned that both bash and ssh are consuming from stdin and this needs to prevented by either using ssh’s `-n` or explicitly specifying stdin, we also noticed that dash’s behavior is different from all the other main shells and could be considered a bug (which we reported as #862907).

Lessons learned:

  • Be aware of ssh’s `-n` option when using ssh/scp inside scripts.
  • Feeding shell scripts via stdin is not only error-prone but also very inefficient, as a standards-compliant implementation requires a read(2) system call per byte of input. Instead, create a temporary script and execute it as a file (see the sketch after this list).
  • When debugging problems make sure to explore different approaches and tools to ensure you’re not relying on a buggy behavior in any involved tool.
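
A minimal sketch of that last approach, applied to the wrapper from above (/target is a placeholder for your real chroot):

# Write the commands to a file instead of feeding them to bash via stdin,
# so ssh cannot consume the remaining script text:
cat > /target/tmp/deploy.sh << "EOF"
set -e
ssh root@localhost hostname >/dev/null
exit 1
EOF
chroot /target /bin/bash /tmp/deploy.sh
echo "return code = $?"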

Thanks to Guillem Jover for review and feedback regarding this blog post.

18 May, 2017 07:29AM by mika

Tianon Gravi

My Docker Install Process (redux)

Since I wrote my first post on this topic, Docker has switched from apt.dockerproject.org to download.docker.com, so this post revisits my original steps, but tailored for the new repo.

There will be less commentary this time (straight to the beef). For further commentary on “why” for any step, see my previous post.

These steps should be fairly similar to what’s found in upstream’s “Install Docker on Debian” document, but do differ slightly in a few minor ways.

grab Docker’s APT repo GPG key

# "Docker Release (CE deb)"

export GNUPGHOME="$(mktemp -d)"
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 9DC858229FC7DD38854AE2D88D81803C0EBFCD88

# stretch+
gpg --export --armor 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 | sudo tee /etc/apt/trusted.gpg.d/docker.gpg.asc

# jessie
# gpg --export 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 | sudo tee /etc/apt/trusted.gpg.d/docker.gpg > /dev/null

rm -rf "$GNUPGHOME"

Verify:

$ apt-key list
...

/etc/apt/trusted.gpg.d/docker.gpg.asc
-------------------------------------
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]

...

add Docker’s APT source

With the switch to download.docker.com, HTTPS is now mandated:

$ sudo apt-get update && sudo apt-get install apt-transport-https

Setup sources.list:

echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable" | sudo tee /etc/apt/sources.list.d/docker.list

Add the edge component for monthly releases and test for release candidates (i.e., ... stretch stable edge). Replace stretch with jessie for Jessie installs.

At this point, you should be safe to run apt-get update to verify the changes:

$ sudo apt-get update
...
Get:5 https://download.docker.com/linux/debian stretch/stable amd64 Packages [1227 B]
...
Reading package lists... Done

(There shouldn’t be any warnings or errors about missing keys, etc.)

configure Docker

This step could be done after Docker’s installed (and indeed, that’s usually when I do it because I forget that I should until I’ve got Docker installed and realize that my configuration is suboptimal), but doing it before ensures that Docker doesn’t have to be restarted later.

sudo mkdir -p /etc/docker
sudo sensible-editor /etc/docker/daemon.json

(sensible-editor can be replaced by whatever editor you prefer, but that command should choose or prompt for a reasonable default)

I then fill daemon.json with at least a default storage-driver. Whether I use aufs or overlay2 depends on my kernel version and available modules – if I’m on Ubuntu, AUFS is still a no-brainer (since it’s included in the default kernel if the linux-image-extra-XXX/linux-image-extra-virtual package is installed), but on Debian AUFS is only available in either 3.x kernels (jessie’s default non-backports kernel) or recently in the aufs-dkms package (as of this writing, still only available on stretch and sid – no jessie-backports option).

If my kernel is 4.x+, I’m likely going to choose overlay2 (or if that errors out, the older overlay driver).

Choosing an appropriate storage driver is a fairly complex topic, and I’d recommend that for serious production deployments, more research on pros and cons is performed than I’m including here (especially since AUFS and OverlayFS are not the only options – they’re just the two I personally use most often).

{
	"storage-driver": "overlay2"
}
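
If you want to script that decision, here is a rough sketch of the kernel-version check described above (my assumption: a 4.x+ kernel means overlay2, otherwise aufs with the module caveats already mentioned):

# Pick a storage driver based on the running kernel's major version:
kmajor="$(uname -r | cut -d. -f1)"
if [ "$kmajor" -ge 4 ]; then
	driver="overlay2"
else
	driver="aufs"
fi
printf '{\n\t"storage-driver": "%s"\n}\n' "$driver" | sudo tee /etc/docker/daemon.json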

configure boot parameters

I usually set a few boot parameters as well (in /etc/default/grub’s GRUB_CMDLINE_LINUX_DEFAULT option – run sudo update-grub after adding these, space-separated).

  • cgroup_enable=memory – enable “memory accounting” for containers (allows docker run --memory for setting hard memory limits on containers)
  • swapaccount=1 – enable “swap accounting” for containers (allows docker run --memory-swap for setting hard swap memory limits on containers)
  • systemd.legacy_systemd_cgroup_controller=yes – newer versions of systemd may disable the legacy cgroup interfaces Docker currently uses; this instructs systemd to keep those enabled (for more details, see systemd/systemd#4628, opencontainers/runc#1175, docker/docker#28109)
  • vsyscall=emulate – allow older binaries to run (debian:wheezy, etc.; see docker/docker#28705)

All together:

...
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1 systemd.legacy_systemd_cgroup_controller=yes vsyscall=emulate"
...

install Docker!

Finally, the time has come.

$ sudo apt-get install -V docker-ce
...
   docker-ce (17.03.1~ce-0~debian-stretch)
...

$ sudo docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:07:28 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:07:28 2017
 OS/Arch:      linux/amd64
 Experimental: false

$ sudo usermod -aG docker "$(id -un)"
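
After logging out and back in (so the new group membership applies), a quick smoke test such as the following confirms the daemon works end-to-end:

docker run --rm hello-world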

18 May, 2017 06:00AM by Tianon Gravi (admwiggin@gmail.com)

May 17, 2017

hackergotchi for Daniel Pocock

Daniel Pocock

Hacking the food chain in Switzerland

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH, (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick, can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis, an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

17 May, 2017 06:41PM by Daniel.Pocock

Reproducible builds folks

Reproducible Builds: week 107 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday May 7 and Saturday May 13 2017:

Report from Reproducible Builds Hamburg Hackathon

We were 16 participants from 12 projects: 7 Debian, 2 repeatr.io, 1 ArchLinux, 1 coreboot + LEDE, 1 F-Droid, 1 ElectroBSD + privoxy, 1 GNU R, 1 in-toto.io, 1 Meson and 1 openSUSE. Three people came from the USA, 3 from the UK, 2 from Finland, 1 from Austria, 1 from Denmark and 6 from Germany, plus we had several guests from our gracious hosts at the CCCHH hackerspace as well as a guest from Australia…

We had four presentations:

Some of the things we worked on:

  • h01ger did orga stuff for this very hackathon, discussed tests.r-b.o with various non-Debian contributors, filed some bugs and restarted the policy discussion in #844431. He also did some polishing work on tests.r-b.o which shall be covered in the next issue of our weekly blog.
  • Justin Cappos involved many of us in interesting discussions and started to write an academic paper about Reproducible Builds of which he shared an early beta on our mailinglist.
  • Chris Lamb (lamby) filed a number of patches for individual packages, worked on diffoscope, merged many changes to strip-nondeterminism and also filed #862073 against dak to upload buildinfo files to external services.
  • Maria Glukhova (siamezzze) fixed a bug with plots on tests.reproducible-builds.org and worked on diffoscope test coverage.
  • Lynxis worked on a new squashfs upstream release improving support for reproducible squashfs filesystems and also had some time to hack on coreboot and show others how to install coreboot on real hardware.
  • Michael Poehn worked on integrating F-Droid builds into tests.reproducible-builds.org, on the F-Droid verification utility and also ran some app reproducibility tests.
  • Bernhard worked on various unreproducible issues upstream and submitted fixes for curl, bzr, ant.
  • Erin Myhre worked on bootstrapping cleanroom builds of compiler components in Repeatr sandboxes.
  • Calvin Behling merged improvements to reppl for a cleaner storage format and better error handling and did design work for the next version of repeatr pipeline execution. Calvin also led the reproducibility testing of restaurant mood lighting.
  • Eric and Calvin also claim to have had all sorts of useful exchanges about the state of other projects, and learned a lot about where to look for more info about debian bootstrap and archive mirroring from steven and lamby :)
  • Phil Hands came by to say hi and worked on testing d-i on jenkins.debian.net.
  • Chris West (Faux) worked on extending misc.git:has-only.py, and started looking at Britney.

We had a Debian focussed meeting where we discussed a number of topics:

  • IRC meetings: yes, we want to try again to have them, monthly, a poll for a good date is being held.
  • Debian tests post Stretch: we'll add tests for stable/Stretch.
  • .buildinfo files, how forward: we need sourceful uploads for any arch:all packages. dak should send .buildinfo files to buildinfo.debian.net.
  • (pre?) Stretch release press release: we should do that, esp. as our achievements are largely unrelated to Stretch.
  • Reproducible Builds Summit 3: yes, we want that.
  • what to do (in notes.git) with resolved issues: keep the issues.
  • strip-nondeterminism quo vadis: Justin reminded us that strip-nondeterminism is a workaround we want to get rid of.

And then we also had a lot of fun in the hackerspace, enjoying some of their gimmicks, such as being able to open physical doors with ssh or controlling light and music with a web browser without authentication (besides being in the right network).

Not quite the hackathon

(This wasn't the hackathon per-se, but some of us appreciated these sights and so we thought you would too.)

Many thanks to:

  • Debian for sponsoring food and accommodation!
  • Dock Europe for providing us with really nice accommodation in the house!
  • CCC Hamburg for letting us use their hackerspace for >3 days non-stop!

News and media coverage

openSUSE has had a security breach in their infrastructure, including their build services. As of this writing, the scope and impact are still unclear; however, the incident illustrates that no one should rely on being able to secure their infrastructure at all times. Reproducible Builds help mitigate this by allowing independent verification of build results, by parties that are unaffected by the compromise.

(Whilst this can happen to anyone, kudos to openSUSE for being open about it. Now let's continue working on Reproducible Builds everywhere!)

On May 13th Chris Lamb gave a talk on Reproducible Builds at OSCAL 2017 in Tirana, Albania.

OSCAL 2017

Toolchain bug reports and fixes

Packages' bug reports

Reviews of unreproducible packages

11 package reviews have been added, 2562 have been updated and 278 have been removed in this week, adding to our knowledge about identified issues. Most of the updates were to move ~1800 packages affected by the generic catch-all captures_build_path (out of ~2600 total) to the more specific gcc_captures_build_path, fixed by our proposed patches to GCC.

5 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (1)
  • Chris Lamb (2)
  • Chris West (1)

diffoscope development

diffoscope development continued on the experimental branch:

  • Maria Glukhova:
    • Code refactoring and more tests.
  • Chris Lamb:
    • Add safeguards against unpacking recursive or deeply-nested archives. (Closes: #780761)

strip-nondeterminism development

  • strip-nondeterminism 0.033-1 and -2 were uploaded to unstable by Chris Lamb. It included contributions from:

  • Bernhard M. Wiedemann:

    • Add cpio handler.
    • Code quality improvements.
  • Chris Lamb:
    • Add documentation and increase verbosity, in support of the long-term aim of removing the need for this tool.

reprotest development

  • reprotest 0.6.1 and 0.6.2 were uploaded to unstable by Ximin Luo. It included contributions from:

  • Ximin Luo:

    • Add a documentation section on "Known bugs".
    • Move developer documentation away from the man page.
    • Mention release instructions in the previous changelog.
    • Preserve directory structure when copying artifacts. Otherwise hash output on a successful reproduction sometimes fails, because find(1) can't find the artifacts using the original artifact_pattern.
  • Chris Lamb
    • Add proper release instructions and a keyring.

trydiffoscope development

  • Chris Lamb:
    • Uses the diffoscope from Debian experimental if possible.

Misc.

This week's edition was written by Ximin Luo, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

17 May, 2017 04:08PM

Jamie McClelland

Late to the Raspberry Pi party

I finally bought my first Raspberry Pi to set up as a router and wifi access point.

It wasn't easy.

I first had to figure out what to buy. I think that was the hardest part.

I ended up with:

  • Raspberry Pi 3 Model B, 1.2GHz 64-bit quad-core ARMv8 CPU, 1GB RAM (Model number: RASPBERRYPI3-MODB-1GB)
  • Transcend USB 3.0 SDHC / SDXC / microSDHC / SDXC Card Reader, TS-RDF5K (Black). I only needed this because I don't have one already and I will need a way to copy a raspbian image from my laptop to a micro SD card.
  • Centon Electronics Micro SD Card 16 GB (S1-MSDHC4-16G). This is the micro sd card.
  • Smraza Clear case for Raspberry Pi 3 2 Model B with Power Supply, 2pcs Heatsinks and Micro USB with On/Off Switch. And this is the box to put it all in.

I already have a cable matters USB to ethernet device, which will provide the second ethernet connection so this device can actually work as a router.

I studiously followed the directions to download the raspbian image and copy it to my micro sd card. I also touched a file on the boot partition called ssh so ssh would start automatically. Note: I first touched the ssh file on the root partition (sdb2) before realizing it belonged on the boot partition (sdb1). And, despite ambiguous directions found on the Internet, lowercase 'ssh' for the filename seems to do the trick.

Then, I found the IP address with the help of NMAP (sudo nmap -sn 192.168.69.*) and tried to ssh in but alas...

Connection reset by 192.168.69.116 port 22

No dice.

So, I re-mounted the sdb2 partition of the micro sd card and looked in var/log/auth.log and found:

May  5 19:23:00 raspberrypi sshd[760]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
May  5 19:23:00 raspberrypi sshd[760]: fatal: No supported key exchange algorithms [preauth]
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_rsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_dsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format

How did that happen? And wait a minute...

0 jamie@turkey:~$ ls -l /mnt/etc/ssh/ssh_host_ecdsa_key
-rw------- 1 root root 0 Apr 10 05:58 /mnt/etc/ssh/ssh_host_ecdsa_key
0 jamie@turkey:~$ date
Fri May  5 15:44:15 EDT 2017
0 jamie@turkey:~$

Are the keys embedded in the image? Isn't that wrong?

I fixed with:

0 jamie@turkey:mnt$ sudo rm /mnt/etc/ssh/ssh_host_*
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_rsa_key -N '' -t rsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_dsa_key -N '' -t dsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_ed25519_key -N '' -t ed25519
0 jamie@turkey:mnt$

NOTE: I just did a second installation and this didn't happen. Maybe something went wrong as I experimented with SSH vs ssh on the boot partition?

Then I could ssh in. I removed the pi user account and added my ssh key to /root/.ssh/authorized_keys and put a new name "mondragon" in the /etc/hostname file.
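
Roughly, those steps look like this (a sketch; you need to be logged in as root rather than pi before removing the pi account, and the key below is a placeholder for your own public key):

deluser --remove-home pi
mkdir -p /root/.ssh && chmod 700 /root/.ssh
echo "ssh-ed25519 AAAA... user@laptop" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
echo "mondragon" > /etc/hostname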

And... I upgraded to Debian stretch and rebooted.

Then, I followed these instructions for fixing the wifi (replacing the firmware does still work for me).

I plugged my cable matters USB/Ethernet adapter into the device so it would be recognized, but left it disconnected.

Next I started to configure the device to be a wifi access point using this excellent tutorial, but decided I wanted to set up my networks using systemd-networkd instead.

Since /etc/network/interfaces already had eth0 set to manual (because apparently it is controlled by dhcpcd instead), I didn't need any modifications there.

However, I wanted to use the dhcp client built-in to systemd-networkd, so to prevent dhcpcd from obtaining an IP address, I purged dhcpcd:

apt-get purge dhcpcd5

I was planning to also use systemd-networkd to name the devices (using *.link files) but nothing I could do could convince systemd to rename them, so I gave up and added /etc/udev/rules.d/70-persistent-net.rules:

    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b8:27:eb:ce:b5:c3", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="wan"
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="a0:ce:c8:01:20:7d", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="lan"
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b8:27:eb:9b:e0:96", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="wlan"

(If you are copying and pasting the mac addresses will have to change.)

Then I added the following files:

root@mondragon:~# head /etc/systemd/network/*
==> /etc/systemd/network/50-lan.network <==
[Match]
Name=lan

[Network]
Address=192.168.69.1/24

==> /etc/systemd/network/55-wlan.network <==
[Match]
Name=wlan

[Network]
Address=10.0.69.1/24

==> /etc/systemd/network/60-wan.network <==
[Match]
Name=wan

[Network]
DHCP=v4
IPForward=yes
IPMasquerade=yes
root@mondragon:~#

Sadly, IPMasquerade doesn't seem to work either for some reason, so...

root@mondragon:~# cat /etc/systemd/system/masquerade.service 
[Unit]
Description=Start masquerading because Masquerade=yes not working in wan.network.

[Service]
Type=oneshot
ExecStart=/sbin/iptables -t nat -A POSTROUTING -o wan -j MASQUERADE

[Install]
WantedBy=network.target
root@mondragon:~#

And, systemd DHCPServer worked, but then it didn't and I couldn't figure out how to debug, so...

apt-get install dnsmasq

Followed by:

root@mondragon:~# cat /etc/dnsmasq.d/mondragon.conf 
# Don't provide DNS services (unbound does that).
port=0

interface=lan
interface=wlan

# Only provide dhcp services since systemd-networkd dhcpserver seems
# flakey.
dhcp-range=set:cable,192.168.69.100,192.168.69.150,255.255.255.0,4h
dhcp-option=tag:cable,option:dns-server,192.168.69.1
dhcp-option=tag:cable,option:router,192.168.69.1

dhcp-range=set:wifi,10.0.69.100,10.0.69.150,255.255.255.0,4h
dhcp-option=tag:wifi,option:dns-server,10.0.69.1
dhcp-option=tag:wifi,option:router,10.0.69.1

root@mondragon:~#

It would probably be simpler to have dnsmasq provide DNS service also, but I happen to like unbound:

apt-get install unbound

And...

root@mondragon:~# cat /etc/unbound/unbound.conf.d/server.conf 
server:
    interface: 127.0.0.1
    interface: 192.168.69.1
    interface: 10.0.69.1

    access-control: 192.168.69.0/24 allow
    access-control: 10.0.69.0/24 allow

    # We do query localhost for our stub zone: loc.cx
    do-not-query-localhost: no

    # Up this level when debugging.
    log-queries: no
    logfile: ""
    #verbosity: 1

    # Settings to work better with systemd
    do-daemonize: no
    pidfile: ""
root@mondragon:~# 

Now on to the wifi access point.

apt-get install hostapd

And the configuration file:

root@mondragon:~# cat /etc/hostapd/hostapd.conf
# This is the name of the WiFi interface we configured above
interface=wlan

# Use the nl80211 driver with the brcmfmac driver
driver=nl80211

# This is the name of the network
ssid=peacock

# Use the 2.4GHz band
hw_mode=g

# Use channel 6
channel=6

# Enable 802.11n
ieee80211n=1

# Enable WMM
wmm_enabled=1

# Enable 40MHz channels with 20ns guard interval
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]

# Accept all MAC addresses
macaddr_acl=0

# Use WPA authentication
auth_algs=1

# Require clients to know the network name
ignore_broadcast_ssid=0

# Use WPA2
wpa=2

# Use a pre-shared key
wpa_key_mgmt=WPA-PSK

# The network passphrase
wpa_passphrase=xxxxxxxxxxxx

# Use AES, instead of TKIP
rsn_pairwise=CCMP
root@mondragon:~#

The hostapd package doesn't have a systemd startup file, so I added one:

root@mondragon:~# cat /etc/systemd/system/hostapd.service 
[Unit]
Description=Hostapd IEEE 802.11 AP, IEEE 802.1X/WPA/WPA2/EAP/RADIUS Authenticator
Wants=network.target
Before=network.target
Before=network.service

[Service]
ExecStart=/usr/sbin/hostapd /etc/hostapd/hostapd.conf

[Install]
WantedBy=multi-user.target
root@mondragon:~#
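
To have all of this come up at boot, the relevant units still need to be enabled; a sketch (unit names match the files created above; dnsmasq and unbound are usually enabled automatically when installed):

systemctl daemon-reload
systemctl enable systemd-networkd masquerade.service hostapd.service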

My last step was to modify /etc/ssh/sshd_config so it only listens on the lan and wlan interfaces (listening on wlan is a bit of a risk, but also useful when mucking with the lan network settings to ensure I don't get locked out).
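
For reference, that amounts to something like the following in /etc/ssh/sshd_config (a sketch; the addresses are the router's own lan and wlan addresses from the networkd files above, and ssh needs a restart afterwards):

ListenAddress 192.168.69.1
ListenAddress 10.0.69.1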

17 May, 2017 02:46PM

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.14

Weblate 2.14 has been released today, slightly ahead of schedule. There are quite a lot of security improvements based on reports we received through the HackerOne program, API extensions and other minor improvements.

Full list of changes:

  • Add glossary entries using AJAX.
  • The logout now uses POST to avoid CSRF.
  • The API key token reset now uses POST to avoid CSRF.
  • Weblate sets Content-Security-Policy by default.
  • The local editor URL is validated to avoid self-XSS.
  • The password is now validated against common flaws by default.
  • Notify users about important activity with their account such as password change.
  • The CSV exports now escape potential formulas.
  • Various minor improvements in security.
  • The authentication attempts are now rate limited.
  • Suggestion content is stored in the history.
  • Store important account activity in audit log.
  • Ask for password confirmation when removing account or adding new associations.
  • Show time when suggestion has been made.
  • There is new quality check for trailing semicolon.
  • Ensure that search links can be shared.
  • Included source string information and screenshots in the API.
  • Allow to overwrite translations through API upload.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on demo server. You can login there with demo account using demo password or register your own user. Weblate is also being used on https://hosted.weblate.org/ as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who have helped so far! The roadmap for next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing bounty for them.

Filed under: Debian English SUSE Weblate

17 May, 2017 02:00PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Upcoming Rcpp Talks

Very excited about the next few weeks which will cover a number of R conferences, workshops or classes with talks, mostly around Rcpp and one notable exception:

  • May 19: Rcpp: From Simple Examples to Machine learning, pre-conference workshop at our R/Finance 2017 conference here in Chicago

  • May 26: Extending R with C++: Motivation and Examples, invited keynote at R à Québec 2017 at Université Laval in Quebec City, Canada

  • June 28-29: Higher-Performance R Programming with C++ Extensions, two-day course at the Zuerich R Courses @ U Zuerich in Zuerich, Switzerland

  • July 3: Rcpp at 1000+ reverse depends: Some Lessons Learned (working title), at DSC 2017 preceding useR! 2017 in Brussels, Belgium

  • July 4: Extending R with C++: Motivation, Introduction and Examples, tutorial preceding useR! 2017 in Brussels, Belgium

  • July 5, 6, or 7: Hosting Data Packages via drat: A Case Study with Hurricane Exposure Data, accepted presentation, joint with Brooke Anderson

If you are near one of those events, interested and able to register (for the events requiring registration), I would love to chat before or after.

17 May, 2017 02:37AM

May 16, 2017

Enrico Zini

Accident on the motorway

There was an accident on the motorway. Luckily no one got seriously wounded, but a truckful of sugar and a truckful of cereals completely spilled on the motorway and took some time to clean up.

(Photo gallery: ten photos taken between 19:15 and 22:10.)

16 May, 2017 09:12PM

hackergotchi for Daniel Pocock

Daniel Pocock

Building an antenna and receiving ham and shortwave stations with SDR

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band.

Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May:

After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday and a wide range of shortwave and ham signals were detected:

Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

  • RTL-SDR dongle (~ € 25): Converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy a dongle intended for SDR with a TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV.
  • Enamelled copper wire, 25 meters or more (~ € 10): Loop antenna. Thicker wire provides better reception and is more suitable for transmitting (if you have a license) but it is heavier. The antenna I've demonstrated at recent events uses 1mm thick wire.
  • 4 (or more) ceramic egg insulators (~ € 10): Attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive.
  • 4:1 balun (from € 20): The actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc). You may want to buy more than one balun, for example, a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope and an SO-239 socket.
  • 5 meter RG-58 coaxial cable with male PL-259 plugs on both ends (~ € 10): If using more than 5 meters, or if you want to use higher frequencies above 30MHz, use a thicker, heavier and more expensive cable like RG-213. The cable must be 50 ohm.
  • Antenna Tuning Unit (ATU) (~ € 20 for receive only, or second hand): I've been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive.
  • PL-259 to SMA male pigtail, up to 50cm, RG58 (~ € 5): Joins the ATU to the up-converter. The cable must be RG58 or another 50 ohm cable.
  • Ham It Up v1.3 up-converter (~ € 40): Mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle.
  • SMA (male) to SMA (male) pigtail (~ € 2): Joins the up-converter to the RTL-SDR dongle.
  • USB charger and USB type B cable (~ € 5): Used for power to the up-converter. A spare USB mobile phone charger plug may be suitable.
  • String or rope (€ 5): For mounting the antenna. A lighter and cheaper string is better for portable use while a stronger and weather-resistant rope is better for a fixed installation.

Building the antenna

There are numerous online calculators for measuring the amount of enamelled copper wire to cut.

For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna length is 21.336 meters.

Add an extra 24 cm (extra 12 cm on each end) for folding the wire through the hooks on the balun.

After cutting the wire, feed it through the egg insulators before attaching the wire to the balun.

Measure the extra 12 cm at each end of the wire and wrap some tape around there to make it easy to identify in future. Fold it, insert it into the hook on the balun and twist it around itself. Use four to six twists.

Strip off approximately 0.5cm of the enamel on each end of the wire with a knife, sandpaper or some other tool.

Insert the exposed ends of the wire into the screw terminals and screw it firmly into place. Avoid turning the screw too tightly or it may break or snap the wire.

Insert string through the egg insulators and/or the middle hook on the balun and use the string to attach it to suitable support structures such as a building, posts or trees. Try to keep it at least two meters from any structure. Maximizing the surface area of the loop improves the performance: a circle is an ideal shape, but a square or 4:3 rectangle will work well too.

For optimal performance, if you imagine the loop is on a two-dimensional plane, the first couple of meters of feedline leaving the antenna should be on the plane too and at a right angle to the edge of the antenna.

Join all the other components together using the coaxial cables.

Configuring gqrx for the up-converter and shortwave signals

Inspect the up-converter carefully. Look for the crystal and find the frequency written on the side of it. The frequency written on the specification sheet or web site may be wrong, so looking at the crystal itself is the best way to be certain. On my Ham It Up, I found a crystal with 125.000 written on it; this is 125 MHz.

Launch gqrx, go to the File menu and select I/O devices. Change the LNB LO value to match the crystal frequency on the up-converter, with a minus sign. For my Ham It Up, I use the LNB LO value -125.000000 MHz.

Click OK to close the I/O devices window.

On the Input Controls tab, make sure Hardware AGC is enabled.

On the Receiver options tab, change the Mode value. Commercial shortwave broadcasts use AM and amateur transmissions use single sideband: by convention, LSB is used for signals below 10MHz and USB is used for signals above 10MHz. To start exploring the 20 meter amateur band around 14.2 MHz, for example, use USB.

In the top of the window, enter the frequency, for example, 14.200 000 MHz.

Now choose the FFT Settings tab and adjust the Freq zoom slider. Zoom until the width of the display is about 100 kHz, for example, from 14.15 on the left to 14.25 on the right.

Click the Play icon at the top left to start receiving. You may hear white noise. If you hear nothing, check the computer's volume controls, move the Gain slider (bottom right) to the maximum position and then lower the Squelch value on the Receiver options tab until you hear the white noise or a transmission.

Adjust the Antenna Tuner knobs

Now that gqrx is running, it is time to adjust the knobs on the antenna tuner (ATU). Reception improves dramatically when it is tuned correctly. Exact instructions depend on the type of ATU you have purchased, here I present instructions for the MFJ-971 that I have been using.

Turn the TRANSMITTER and ANTENNA knobs to the 12 o'clock position and leave them like that. Turn the INDUCTANCE knob while looking at the signals in the gqrx window. When you find the best position, the signal strength displayed on the screen will appear to increase (the animated white line should appear to move upwards and maybe some peaks will appear in the line).

When you feel you have found the best position for the INDUCTANCE knob, leave it in that position and begin turning the ANTENNA knob clockwise looking for any increase in signal strength on the chart. When you feel that is correct, begin turning the TRANSMITTER knob.

Listening to a transmission

At this point, if you are lucky, some transmissions may be visible on the gqrx screen. They will appear as darker colours in the waterfall chart. Try clicking on one of them, the vertical red line will jump to that position. For a USB transmission, try to place the vertical red line at the left hand side of the signal. Try dragging the vertical red line or changing the frequency value at the top of the screen by 100 Hz at a time until the station is tuned as well as possible.

Try and listen to the transmission and identify the station. Commercial shortwave broadcasts will usually identify themselves from time to time. Amateur transmissions will usually include a callsign spoken in the phonetic alphabet. For example, if you hear "CQ, this is Victor Kilo 3 Tango Quebec Romeo" then the station is VK3TQR. You may want to note down the callsign, time, frequency and mode in your log book. You may also find information about the callsign in a search engine.

The video demonstrates reception of a transmission from another country, can you identify the station's callsign and find his location?

If you have questions about this topic, please come and ask on the Debian Hams mailing list. The gqrx package is also available in Fedora and Ubuntu but it is known to crash on startup in Ubuntu 17.04. Users of other distributions may also want to try the Debian Ham Blend bootable ISO live image as a quick and easy way to get started.

16 May, 2017 06:34PM by Daniel.Pocock

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, April 2017

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, about 190 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 19.5 hours (out of 16h allocated + 5.5 remaining hours, thus keeping 2 extra hours for May).
  • Ben Hutchings did 12 hours (out of 15h allocated, thus keeping 3 extra hours for May).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 17.5 hours (out of 16 hours allocated + 3.5 hours remaining, thus keeping 2 hours for May).
  • Guido Günther did 12 hours (out of 8 hours allocated + 4 hours remaining).
  • Hugo Lefeuvre did 15.5 hours (out of 6 hours allocated + 9.5 hours remaining).
  • Jonas Meurer did nothing (out of 4 hours allocated + 3.5 hours remaining, thus keeping 7.5 hours for May).
  • Markus Koschany did 23.75 hours.
  • Ola Lundqvist did 14 hours (out of 20h allocated, thus keeping 6 extra hours for May).
  • Raphaël Hertzog did 11.25 hours (out of 10 hours allocated + 1.25 hours remaining).
  • Roberto C. Sanchez did 16.5 hours (out of 20 hours allocated + 1 hour remaining, thus keeping 4.5 extra hours for May).
  • Thorsten Alteholz did 23.75 hours.

Evolution of the situation

The number of sponsored hours decreased slightly and we’re now again a little behind our objective.

The security tracker currently lists 54 packages with a known CVE and the dla-needed.txt file lists 37. The number of open issues is comparable to last month.

Thanks to our sponsors

New sponsors are in bold.


16 May, 2017 03:52PM by Raphaël Hertzog

hackergotchi for Francois Marier

Francois Marier

Recovering from an unbootable Ubuntu encrypted LVM root partition

A laptop that was installed using the default Ubuntu 16.10 (yakkety) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty).

After showing the boot screen for about 30 seconds, a busybox shell pops up:

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.

(initramfs)

Typing exit will display more information about the failure before bringing us back to the same busybox shell:

Gave up waiting for root device. Common problems:
  - Boot args (cat /proc/cmdline)
    - Check rootdelay= (did the system wait long enough?)
    - Check root= (did the system wait for the right device?)
  - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell! 

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)   
Enter 'help' for list of built-in commands.  

(initramfs)

which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found.

There is some comprehensive advice out there but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk

First, create bootable USB disk using the latest Ubuntu installer:

  1. Download a desktop image.
  2. Copy the ISO directly on the USB stick (overwriting it in the process):

     dd if=ubuntu.iso of=/dev/sdc
    

and boot the system using that USB stick (hold the option key during boot on Apple hardware).

Mount the encrypted partition

Assuming a drive which is partitioned this way:

  • /dev/sda1: EFI partition
  • /dev/sda2: unencrypted boot partition
  • /dev/sda3: encrypted LVM partition

Open a terminal and mount the required partitions:

cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay
mount /dev/mapper/ubuntu--vg-root /mnt
mount /dev/sda2 /mnt/boot
mount -t proc proc /mnt/proc
mount -o bind /dev /mnt/dev

Note:

  • When running cryptsetup luksOpen, you must use the same name as the one that is in /etc/crypttab on the root partition (sda3_crypt in this example).

  • All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.

Regenerate the initramfs on the boot partition

Then "enter" the root partition using:

chroot /mnt

and make sure that the lvm2 package is installed:

apt install lvm2

before regenerating the initramfs for all of the installed kernels:

update-initramfs -c -k all
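
Once that completes without errors, you can leave the chroot, undo the mounts from earlier and reboot into the repaired system; roughly:

exit
umount /mnt/dev /mnt/proc /mnt/boot /mnt
vgchange -an
cryptsetup luksClose sda3_crypt
reboot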

16 May, 2017 04:10AM

May 15, 2017

hackergotchi for Gunnar Wolf

Gunnar Wolf

Starting a project on private and anonymous network usage

I am starting a project with the students of LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre, Free Software Research and Development Laboratory) of the Engineering Faculty of UNAM:

We want to dig into the technical and social implications of mechanisms that provide for anonymous, private usage of the network. We will have our first formal work session this Wednesday, for which we have invited several interesting people to join the discussion and help provide a path for our oncoming work. Our invited and confirmed guests are, in alphabetical order:

  • Salvador Alcántar (Wikimedia México)
  • Sandino Araico (1101)
  • Gina Gallegos (ESIME Culhuacán)
  • Juliana Guerra (Derechos Digitales)
  • Jacobo Nájera (Enjambre Digital)
  • Raúl Ornelas (Instituto de Investigaciones Económicas)

  • As well as LIDSOL's own teachers and students.

This first session is mostly exploratory: we should keep notes and decide which directions to pursue to begin with. Do note that by "research" we are starting from the undergraduate student level; not that we want to start by changing the world, but we do want to empower the students who have joined our laboratory to change themselves and change the world. Of course, we hope to further such goals via the knowledge and involvement of projects (not just the tools!) such as Tor.

15 May, 2017 04:43PM by gwolf

hackergotchi for Michal Čihař

Michal Čihař

New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue was over one month long, so it's time to process it and include new projects.

This time, the newly hosted projects include:

We now also host a few new Minetest mods:

If you want to support this effort, please donate to Weblate, especially recurring donations are welcome to make this service alive. You can do them on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

15 May, 2017 04:00PM

intrigeri

GNOME and Debian usability testing, May 2017

During the Contribute your skills to Debian event that took place in Paris last week-end, we conducted a usability testing session. Six people were tasked with testing a few aspects of the GNOME 3.22 desktop environment and of the Debian 9 (Stretch) operating system. A number of other people observed them and took notes. Then, two observers and three testers analyzed the results, which we are hereby presenting: we created a heat map visualization, summed up the challenges met during the tests, and wrote this blog post together. We will point the relevant upstream projects to our results.

A couple of other people also did some usability testing but went into much more depth: their feedback is much more detailed and comes with a number of improvement ideas. I will process and publish their results as soon as possible.

Missions

Testers were provided a laptop running GNOME on a Debian 9 (Stretch) Live system. A quick introduction (mostly copied from the one we found in some GNOME usability testing reports) was read. Then they were asked to complete the following tasks.

A. Nautilus

Mission A.1 — Download and rename file in Nautilus

  1. Download a file from the web, a PDF document for example.
  2. Open the folder in which the file has been downloaded.
  3. Rename the downloaded file to SUCCESS.pdf.
  4. Toggle the browser window to full screen.
  5. Open the file SUCCESS.pdf.
  6. Go back to the File manager.
  7. Close the file SUCCESS.pdf.

Mission A.2 — Manipulate folders in Nautilus

  1. Create a new folder named cats in your user directory.
  2. Create a new folder named to do in your user directory.
  3. Move the cats folder to the to do folder.
  4. Delete the cats folder.

Mission A.3 — Create a bookmark in Nautilus

  1. Create a folder named unicorns in your personal directory.
  2. This folder is important. Add a bookmark for unicorns in order to find it again in a few weeks.

Mission A.4 — Nautilus display settings

Folders and files are usually listed as icons, but they can also be displayed differently.

  1. Configure the File manager to make it show items as a list, with one file per line.
  2. You forgot your glasses and the font size is too small for you to see the text: increase the size of the text.

B. Package management

Introduction

On Debian, each application is available as a "package" which contains every file needed for the software to work.

Unlike in other operating systems, it is rarely necessary, and almost never a good idea, to download and install software from the author's website. We can rather install it from an online library managed by Debian (like an appstore). This alternative offers several advantages, such as being able to update all the software installed in one single action.

Specific tools are available to install and update Debian packages.

Mission B.1 — Install and remove packages

  1. Install the vlc package.
  2. Start VLC.
  3. Remove the vlc package.

Mission B.2 — Search and install a package

  1. Find a piece of software which can download files with BitTorrent in a graphical interface.
  2. Install the corresponding package.
  3. Launch that BitTorrent software.

Mission B.3 — Upgrade the system

Make sure the whole system (meaning all installed packages) is up to date.

C. Settings

Mission C.1 — Change the desktop background

  1. Download an image you like from the web.
  2. Set the downloaded image as the desktop wallpaper.

Mission C.2 — Tweak temporary files management

Configure the system so that temporary files older than three days are deleted automatically.

Mission C.3 — Change the default video player

  1. Install VLC (ask for help if you could not do it during the previous mission).
  2. Make VLC the default video player.
  3. Download a video file from the web.
  4. Open the downloaded video, then check if it opens with VLC.

Mission C.4 — Add and remove world clocks

When you click the time and date in the top bar, a menu pops up. There, you can display clocks in several time-zones.

  1. Add a clock with Rio de Janeiro timezone, then another showing the current time in Boston.
  2. Check that the time and date menu now displays these two additional clocks.
  3. Remove the Boston clock.

Results and analysis

Heat map

We used Jim Hall's heat map technique to summarize our usability test results. As Renata puts it, it is "a great way to see how the users performed on each task. The heat map clarifies how easy or difficult it was for the participant to accomplish a certain task.

  1. Scenario tasks (from the usability test) are arranged in rows.
  2. Test participants (for each tester) are arranged in columns.
  3. The colored blocks represent each tester’s difficulty with each scenario task.

Green blocks represent the ability of the participant to accomplish the tasks with little or no difficulty.

Yellow blocks indicate the tasks that the tester had significant difficulties accomplishing.

Red blocks indicate that testers experienced extreme difficulty or where testers completed the tasks incorrectly.

Black blocks indicate tasks the tester was unable to complete."

Alternatively, here is the spreadsheet that was used to create this picture, with added text to avoid relying on colors only.

Most tasks were accomplished with little or no difficulty so we will now focus on the problematic parts.

What were the challenges?

The heat map shows several "hot" rows, that we will now be looking at in more detail.

Mission A.3 — Create a bookmark in Nautilus

Most testers right-clicked the folder first, and eventually found they could simply drag'n'drop to the bookmarks location in the sidebar.

One tester thought that he could select a folder, click the hamburger icon, and from there use the "Bookmark this folder" menu item. However, this menu action only works on the folder one has entered, not on the selected one.

Mission B.1 — Install and remove packages

Here we faced a number of issues caused by the fact that Debian Live images don't include package indices (with good reason), so no package manager can list available software.

Everyone managed to start a graphical package manager via the Overview (or via the CLI or Alt-F2 for a couple of power users).

Some testers tried to use GNOME Software, which listed only already installed packages (Debian bug #862560) and provided no way we could find to refresh the package indices. That's arguably a bug in Debian Live, but still: GNOME Software might display some useful information when it detects this unusual situation.

We won't list here all the obstacles that were met in Synaptic: it's no news its usability is rather sub-optimal and better alternatives (such as GNOME Software) are in the works.

Mission C.2 — Tweak temporary files management

The mission was poorly phrased: some observers had to clarify that it was specifically about GNOME, and not about generic Linux system administration; some power-users were already searching the web for command-line tools to address the task at hand.

Even with this clarification, no tester would have succeeded without being told they were allowed to use the web with a search query including the word "GNOME", or use the GNOME help or the Overview. Yet eventually all testers succeeded.

It's interesting to note that regular GNOME users had the same problem as others: they did not try searching "temporary" in the Overview and did not look up the GNOME Help until it was suggested to them.

Mission C.3 — Change the default video player

One tester configured one single video file format to be opened by default with VLC, via right-click in Nautilus → Open with → etc. He believed this would be enough to make VLC the default video player, missing the subtle difference between "default video player" and "default player for one single video format".

One tester tried to complete this task inside VLC itself and then needed some help to succeed. It might be that the way web browsers ask "Do you want ThisBrowser to become the default web browser?" gave a hint that an application's GUI is the right place to do it.

Two testers searched "default" in the Overview (perhaps the previous mission, dealing with temporary files, was enough to put them in this direction). At least one tester was confused because the only search result (Details – View information about your system), which is the correct one to get there, includes the word View, suggesting that one can only view settings there, not modify them.

One long-term GNOME user looked in Tweak Tool first, and then used the Overview.

Here again, GNOME users experienced essentially the same issues as others.

Mission C.4 — Add and remove world clocks

One tester tried to look for the clock on the top right corner of the screen, then realized it was in the middle. Other than this, all testers easily found a way to add world clocks.

However, removing a world clock was rather difficult; although most testers managed to do it, it took them a number of attempts to succeed:

  1. Several testers left-clicked or right-clicked the clock they wanted to remove, expecting this would provide them with a way to remove it (which is not the case).
  2. After a while, all testers noticed the Select button (that has no text label nor tooltip info), which allowed them to select the clock they wanted to remove; then, most testers clicked the 1 selected button, hoping it would provide a contextual menu or some other way to act on the selected clocks (it doesn't).
  3. Eventually, everyone managed to locate the Delete button on the bottom right corner of the window; some testers mentioned that it is less visible and flashy than the blue bar that appears on the top of the screen once they had entered "Selection" mode.

General notes and observations

  • None of the participants consulted the GNOME Help, which is unfortunate given its:
    • great quality;
    • translations in several languages;
    • availability and adaptability to regional specifications;
    • adequacy to the currently running version of GNOME.

    Some users found the relevant help page online via web searches; others initially ignored it among search results, then looked for it later after being told that the mission was more about GNOME.

  • Whether testers were already GNOME users or not seldom impacted their chances of success.
  • Unfortunately, we haven't compiled enough information about the testers to provide useful data about who they are and what their background is. Still, we had an interesting mix in terms of genders, age (between 17 and 52 years old), skin color and computer experience.

15 May, 2017 12:55PM

May 14, 2017

hackergotchi for Steve Kemp

Steve Kemp

Some minor updates ..

The past few weeks have been randomly busy; nothing huge has happened, but there have been several minor diversions.

Coding

I made a new release of my console-based mail-client, with integrated Lua scripting, this is available for download over at https://lumail.org/

I've also given a talk (!!) on using a literate/markdown configuration for GNU Emacs. In brief I created two files:

~/.emacs/init.md

This contains both my configuration of GNU Emacs as well as documentation for the same. Neat.

~/.emacs/init.el

This parses the previous file, specifically looking for "code blocks", which are then extracted and evaluated.

This system is easy to maintain, and I'm quite happy with it :)

Fuzzing

Somebody nice took the time to report a couple of bugs against my simple bytecode-interpreting virtual-machine project - all found via fuzzing.

I've done some fun fuzzing of my own in the past, so this was nice to see. I've now resolved those bugs, and updated the README.md file to include instructions on fuzzing it. (Which I started doing myself after receiving the first of the reports.)

Finally I have more personal news too: I had a pair of CT-scans carried out recently, and apparently here in sunny Finland (that's me being ironic, it was snowing in the first week of May) when you undergo a CT-scan you can pay to obtain your data on CD-ROM.

I'm 100% definitely going to get a copy of my brain-scan data. I'll be able to view a 3d-rendered model of my own brain on my desktop. (Once upon a time I worked for a company that produced software, sold to doctors/surgeons, for creating 3d-rendered volumes from individual slices. I confirmed with the radiologist that handled my tests that they do indeed use the standard DICOM format. Small world.)

14 May, 2017 09:00PM

Bits from Debian

New Debian Developers and Maintainers (March and April 2017)

The following contributors got their Debian Developer accounts in the last two months:

  • Guilhem Moulin (guilhem)
  • Lisa Baron (jeffity)
  • Punit Agrawal (punit)

The following contributors were added as Debian Maintainers in the last two months:

  • Sebastien Jodogne
  • Félix Lechner
  • Uli Scholler
  • Aurélien Couderc
  • Ondřej Kobližek
  • Patricio Paez

Congratulations!

14 May, 2017 12:30PM by Jean-Pierre Giraud

Russ Allbery

Review: The Raven and the Reindeer

Review: The Raven and the Reindeer, by T. Kingfisher

Publisher: Red Wombat Tea Company
Copyright: 2016
ASIN: B01BKTT73A
Format: Kindle
Pages: 191

Once upon a time, there was a boy born with frost in his eyes and frost in his heart.

There are a hundred stories about why this happens. Some of them are close to true. Most of them are merely there to absolve the rest of us of blame.

It happens. Sometimes it's no one's fault.

Kay is the boy with frost in his heart. Gerta grew up next door. They were inseparable as children, playing together on cold winter days. Gerta was in love with Kay for as long as she could remember. Kay, on the other hand, was, well, kind of a jerk.

There are not many stories about this sort of thing. There ought to be more. Perhaps if there were, the Gertas of the world would learn to recognize it.

Perhaps not. It is hard to see a story when you are standing in the middle of it.

Then, one night, Kay is kidnapped by the Snow Queen while Gerta watches, helpless. She's convinced that she's dreaming, but when she wakes up, Kay is indeed gone, and eventually the villagers stop the search. But Gerta has defined herself around Kay her whole life, so she sets off, determined to find him, totally unprepared for the journey but filled with enough stubborn, practical persistence to overcome a surprising number of obstacles.

Depending on your past reading experience (and cultural consumption in general), there are two things that may be immediately obvious from this beginning. First, it's written by Ursula Vernon, under her T. Kingfisher pseudonym that she uses for more adult fiction. No one else has quite that same turn of phrase, or writes protagonists with quite the same sort of overwhelmed but stubborn determination. Second, it's a retelling of Hans Christian Andersen's "The Snow Queen."

I knew the first, obviously. I was completely oblivious to the second, having never read "The Snow Queen," or anything else by Andersen for that matter. I haven't even seen Frozen. I therefore can't comment in too much detail on the parallels and divergences between Kingfisher's telling and Andersen's (although you can read the original to compare if you want) other than some research on Wikipedia. As you might be able to tell from the quote above, though, Kingfisher is rather less impressed by the idea of childhood true love than Andersen was. This is not the sort of story in which the protagonist rescues the captive boy through the power of pure love. It's something quite a bit more complicated and interesting: a coming-of-age story for Gerta, in which her innocence is much less valuable than her fundamental decency, empathy, and courage, and in which her motives for her journey change as the journey proceeds. It helps that Kingfisher's world is populated by less idealized characters, many of whom are neither wholly bad nor wholly good, but who think of themselves as basically decent and try to do vaguely the right thing. Although sometimes they need some reminding.

The story does feature a talking raven. (Most certainly not a crow.) His name is the Sound of Mouse Bones Crunching Under the Hooves of God. He's quite possibly the best part.

Gerta does not rescue Kay through the power of pure love. But there is love here, of a sort that Gerta wasn't expecting at all, and of a sort that Andersen never had in mind when he wrote the original. There's also some beautifully-described shapeshifting, delightful old women, and otters. (Also, I find the boy who appears at the very end of the story utterly fascinating, with all his implied parallel story and the implicit recognition that the world does not revolve around Kay and Gerta.) But I think my favorite part is how clearly different Gerta is at the end of her journey than at the beginning, how subtly Kingfisher makes that happen through the course of the story, and how understated but just right her actions are at the very end.

This is really excellent stuff. The next time you're feeling in the mood for a retold and modernized fairy tale, I recommend it.

Rating: 8 out of 10

14 May, 2017 12:08AM

May 13, 2017

Vincent Fourmond

Run QSoas completely non-interactively

QSoas can run scripts, and, since version 2.0, it can be run completely without user interaction from the command-line (though an interface may be briefly displayed). This possibility relies on the following command-line options:

  • --run, which runs the command given on the command-line;
  • --exit-after-running, which closes automatically QSoas after all the commands specified by --run were run;
  • --stdout (since version 2.1), which redirects QSoas's terminal directly to the shell output.
If you create a script.cmds file containing the following commands:
generate-buffer -10 10 sin(x)
save sin.dat
and run the following command from your favorite command-line interpreter:
~ QSoas --stdout --run '@ script.cmds' --exit-after-running
This will create a sin.dat file containing a sinusoid. However, if you run it twice, an Overwrite file 'sin.dat'? dialog box will pop up. You can prevent that by adding the /overwrite=true option to save. As a general rule, you should avoid all commands that may ask questions in the scripts; a /overwrite=true option is also available for save-buffers for instance.
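
For instance, instantiating the advice above (option placement as I understand the QSoas syntax), the save line of the script becomes:

save sin.dat /overwrite=true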

I use this possibility massively because I don't like to store processed files, I prefer to store the original data files and run a script to generate the processed data when I want to plot or to further process them. It can also be used to generate fitted data from saved parameters files. I use this to run automatic tests on Linux, Windows and Mac for every single build, in order to quickly spot platform-specific regressions.

To help you make use of this possibility, here is a shell function (Linux/Mac users only, add to your $HOME/.bashrc file or equivalent, and restart a terminal) to run directly on QSoas command files:

qs-run () {
        QSoas --stdout --run "@ $1" --exit-after-running
}
To run the script.cmds script above, just run
~ qs-run script.cmds

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.1

13 May, 2017 10:45PM by Vincent Fourmond (noreply@blogger.com)

hackergotchi for Ricardo Mones

Ricardo Mones

Disabling "flat-volumes" in pulseaudio

Today I've just faced another of those happy ideas some people implement in software, which can be useful in some cases, but can also be bad as default behaviour.

The problems caused were already posted to the Debian mailing lists, fortunately, as well as their solution, which in a default Debian configuration basically boils down to the following (note that a plain sudo echo ... >> would not work, since the redirection would happen outside of sudo):

$ echo "flat-volumes = no" | sudo tee -a /etc/pulse/daemon.conf
$ pulseaudio -k && pulseaudio
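
To check that the daemon actually picked up the new value, you can dump its effective configuration (a verification step added here, not part of the original recipe):

$ pulseaudio --dump-conf | grep flat-volumes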

And I think the default for Stretch should be set as above: raising the volume to 100% just because of a system notification may be useful for some, but it's not what common users expect.

13 May, 2017 03:12PM

May 12, 2017

hackergotchi for Steve McIntyre

Steve McIntyre

Fonts and presentations

When you're giving a presentation, the choice of font can matter a lot. Not just in terms of how pretty your slides look, but also in terms of whether the data you're presenting is actually properly legible. Unfortunately, far too many fonts are appallingly bad if you're trying to tell certain characters apart. Imagine if you're at the back of a room, trying to read information on a slide that's (typically) too small and (if you're unlucky) the presenter's speech is also unclear to you (noisy room, bad audio, different language). A good clear font is really important here.

To illustrate the problem, I've picked a few fonts available in Google Slides. I've written the characters "1lIoO0" (that's one, lower case L, upper case I, lower case o, upper case O, zero) in each of those fonts. Some of the sans-serif fonts in particular are comically bad for trying to distinguish between these characters.

font examples

It may not matter in all cases whether your audience can read all the characters on your slides and tell them apart, but if you're trying to present scientific or numeric results it's critical. Please consider that before looking for a pretty font.

12 May, 2017 10:08PM

hackergotchi for Daniel Pocock

Daniel Pocock

Thank you to the OSCAL team

The welcome gift deserves its own blog post. If you want to know what is inside, I hope to see you at OSCAL'17.

12 May, 2017 01:26PM by Daniel.Pocock

hackergotchi for Martín Ferrari

Martín Ferrari

6 days to SunCamp

Only six more days to go before SunCamp! If you are still considering it, hurry up, you might still find cheap tickets for the low season.

It will be a small event (about 20-25 people), with a more intimate atmosphere than DebConf. There will be people fixing RC bugs, preparing stuff for after the release, or just discussing with other Debian folks.

There will be at least one presentation from a local project, and surely some members of nearby communities will join us for the day like they did last year.

See you all in Lloret!

12 May, 2017 10:21AM

hackergotchi for Daniel Pocock

Daniel Pocock

Kamailio World and FSFE team visit, Tirana arrival

This week I've been thrilled to be in Berlin for Kamailio World 2017, one of the highlights of the SIP, VoIP and telephony enthusiast's calendar. It is an event that reaches far beyond Kamailio and is well attended by leaders of many of the well known free software projects in this space.

HOMER 6 is coming

Alexandr Dubovikov gave me a sneak peek of the new version of the HOMER SIP capture framework for gathering, storing and analyzing messages in a SIP network.

exploring HOMER 6 with Alexandr Dubovikov at Kamailio World 2017

Visiting the FSFE team in Berlin

Having recently joined the FSFE's General Assembly as the fellowship representative, I've been keen to get to know more about the organization. My visit to the FSFE office involved a wide-ranging discussion with Erik Albers about the fellowship program and FSFE in general.

discussing the Fellowship program with Erik Albers

Steak and SDR night

After a hard day of SIP hacking and a long afternoon at Kamailio World's open bar, a developer needs a decent meal and something previously unseen to hack on. A group of us settled at Escados, Alexanderplatz where my SDR kit emerged from my bag and other Debian users found out how easy it is to apt install the packages, attach the dongle and explore the radio spectrum.
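
For reference, getting started on Debian really is that short, assuming an RTL-SDR dongle (package names as found in the current archive):

$ sudo apt install rtl-sdr gqrx-sdr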

playing with SDR after dinner

Next stop OSCAL'17, Tirana

Having left Berlin, I'm now in Tirana, Albania where I'll give an SDR workshop and Free-RTC talk at OSCAL'17. The weather forecast is between 26 - 28 degrees celsius, the food is great and the weekend's schedule is full of interesting talks and workshops. The organizing team have already made me feel very welcome here, meeting me at the airport and leaving a very generous basket of gifts in my hotel room. OSCAL has emerged as a significant annual event in the free software world and if it's too late for you to come this year, don't miss it in 2018.

OSCAL'17 banner

12 May, 2017 09:48AM by Daniel.Pocock

hackergotchi for Norbert Preining

Norbert Preining

Gaisi Takeuti, 1926-2017

Two days ago one of the most influential logicians of the 20th century passed away, Gaisi Takeuti (竹内 外史). I had the pleasure to meet this excellent man, teacher, writer, thinker several times while he was the president of the Kurt Gödel Society.

I don’t want to recall his achievements in mathematical logic, in particular proof theory, because I am not worth to write about such a genius. I want to recall a few personal stories from my own experience.

I came into contact with Prof. Takeuti via his famous book Proof Theory, which my then professor, now colleague and friend, Matthias Baaz used for teaching us students proof theory. Together with Shoenfield's Mathematical Logic, these two books became the foundation of my whole logic education. It is now back in print, but back then "Proof Theory" was a rare treasure. Few copies remained in the library, and over the years they disappeared one by one, until the last copy we had access to was mine, in which I had scribbled pages and pages of notes and proofs. Matthias later on used these copies for his lectures; I should have written on the back-side!

I remember well my first meeting with Prof. Takeuti: I was at the Conference on Internationalization in 2003 in Tsukuba, long before I moved to Japan. Back then I was just finishing my PhD and without much experience. When I arrived at the hotel, without fail there was a message from Prof. Takeuti inviting me for dinner the following day. We had dinner in a specialty restaurant of his area, together with his lovely wife. I was so nervous about Japanese manners and stuttered Japanese phrases – just to be stopped by Prof. Takeuti pouring himself a glass of sake and telling me: Relax, forget the rules and fill your own glass when you want to. I am well aware that this liberal attitude didn't extend to Japanese colleagues, where he, descendant of a Samurai family, was at times very, extremely strict.

The dinner was decided upon already, not easy since I was still a strict vegetarian back then (now I would have enjoyed the dinner much more!), but for the last course we could decide. I remember with a smile how Prof. Takeuti suggested various sweets in Japanese, just to be interrupted by his wife with "No Gaisi, no!". I asked what was going on and she explained that he wanted to order a Japanese sweet for me – I agreed, and it was probably the worst dish I have had in Japan. Slippery noodles swimming in a cold broth, to be picked up with chopsticks and dipped into a semi-sweet soy sauce. I finished it, but it wasn't good. I should have thought twice when Prof. Takeuti's wife ordered a normal fruit salad.

Scientifically he was simply a genius – and famous for not reading a lot but reinventing everything. One of my research areas, Gödel logics, was reinvented by him as "Intuitionistic Fuzzy Logic" (for an overview see my talk at the Collegium Logicum 2016: Gödel Logics – a short survey). But I want to recall one of my favorite articles of his: "A Conservative Extension of Peano Arithmetic". This was published as part 2 of Volume 17 of Publications of the Mathematical Society of Japan; a retypeset pdf is available here, JSTOR page. Therein he develops classical (real and complex) analysis over Peano arithmetic. He shows that any arithmetical theorem proved in analytic number theory is a theorem in Peano arithmetic. The proof uses Gentzen's cut elimination theorem, the centerpiece of modern proof theory.

With Georg Kreisel having passed away in 2015, and now Gaisi Takeuti, we lose two of the greatest, if not the greatest minds in logic.

12 May, 2017 12:59AM by Norbert Preining

May 11, 2017

Arturo Borrero González

Debunk some Debian myths

Debian CUSL 11

Debian has many years of history, about 25 years already. Over such a long journey of developing our Universal Operating System, some myths, false accusations and bad reputation have arisen.

Today I had the opportunity to discuss this topic, I was invited to give a Debian talk in the “11º Concurso Universitario de Software Libre”, a Spanish contest for students to develop and dig a bit into free-libre open source software (and hardware).

In this talk, I walked through some of the most common Debian myths, and I would like to summarize here some of them, with a short explanation of why I think they should be debunked.

Picture of the talk

myth #1: Debian is old software

Please, use testing or stable-backports. If you use Debian stable your system will in fact be stable and that means: updates contain no new software but only fixes.

myth #2: Debian is slow

We compile and build most of our packages with industry-standard compilers and options. I don't see a significant difference in how fast the Linux kernel or MySQL run on CentOS or on Debian.

myth #3: Debian is difficult

I already discussed this issue back in Jan 2017: Debian is a puzzle: difficult.

myth #4: Debian has no graphical environment

This is, simply put, false. We have gnome, kde, xfce and more. The basic Debian installer asks you what you want at install time.

myth #5: since Debian isn’t commercial, the quality is poor

Did you know that most of our package developers are experts in their packages and in their upstream code? Not all, but most of them. Besides, many package developers get paid to do their Debian job. Also, there are external companies which do indeed offer support for Debian (see freexian for example).

myth #6: I don’t trust Debian

Why? Did we do something to gain this status? If so, please let us know. You don’t trust how we build or configure our packages? You don’t trust how we work? Anyway, I’m sorry, you have to trust someone if you want to use any kind of computer. Supervising every single bit of your computer isn’t practical for you. Please trust us, we do our best.

myth #7: nobody uses Debian

I don’t agree. Many people use Debian. They even run Debian in the International Space Station. Do you count derivatives, such as Ubuntu?

I believe this myth is just pointless, but some people out there really think nobody uses Debian.

myth #8: Debian uses systemd

Well, this is true. But you can run sysvinit if you want. I prefer and recommend systemd though :-)
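
For those who do want sysvinit, the switch boils down to a single package on current Debian (do read the release notes before trying this on a production system):

$ sudo apt install sysvinit-core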

myth #9: Debian is only for servers

No. See myths #1, #2 and #4.

You may download my slides in PDF and in ODP format (only in Spanish, sorry for English readers).

11 May, 2017 04:21PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Residential IPv6 stability

I run some Internet services on my home Internet connection, mostly for myself but also for friends and family. The IPv4 address assigned to my home by my ISP (currently: BT Internet) is dynamic and changes from time to time. To get around this, I make use of a "dynamic dns" service: essentially, a web service that updates a hostname whenever my IP changes.

Since sometime last year I have also had an IPv6 address for my home connection: In fact, lots of them. There are more IPv6 addresses assigned to my home than there are IPv4 addresses on the entire Internet: 4,722,366,482,869,645,213,696 compared to 4,294,967,296 IPv4 addresses for the entire world (of which 3,706,452,992 are usable).
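
(A quick sanity check of that big number: it corresponds to a /56 allocation, which leaves 128 - 56 = 72 host bits, and 2^72 = 4,722,366,482,869,645,213,696, exactly the figure above.)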

I am relatively new to IPv6 (despite having played with it on and off since around the year 2000). I was curious to find out how stable the IPv6 addresses are, compared to the IPv4 one. It turns out that it's very stable: I've had four IPv4 addresses since February this year, but my IPv6 allocation has not changed.

11 May, 2017 04:16PM

hackergotchi for Norbert Preining

Norbert Preining

BachoTeX 2017

A week of typesetting, typography, bookbinding, bibliophily, not to forget long chats with good friends and loads of beer. That is BachoTeX, the best series of conferences I have ever been to. This year BachoTeX was held for the 25th time, and was merged with the TUG Meeting, making for a fireworks display of excellent presentations and long hours of brainstorming, hacking, music making, dancing, and simply enjoying life!

Symphony of Green

And while it was a bit less relaxing for me than in past years, mostly due to the presence of my little daughter, who demanded my presence quite often, it is still the place to be during the Golden Week!

Of course I also gave a talk at BachoTeX about our latest changes in the upcoming TeX Live 2017 release: fmtutil and updmap – past & future changes (or: cleaning up the mess). Big thanks to my company Accelia Inc. for allowing me to attend the conference.

We arrived after a long trip: first via train and plane to Vienna, then a two-day break (including research with a colleague), followed by a night train ride to Warsaw, another train ride to Torun, and a taxi ride to Bachotek. All in all far too long a journey with a 14-month-old girl. Once we finally arrived we went directly to our hut and found it freezing. Fortunately we could organize a heater, so that for the rest of the week we didn't have to live in 5-10 degrees 😉

Our log house in Bachotek

The second day already brought the traditional bonfire. After a small (by Polish standards) dinner we ignored the rain and met at the fireplace for BBQ, beer, and lots of live music.

Bonfire on the second evening

The rain stopped during the bonfire, probably due to my horrible singing, and the following days we were blessed with sunshine and warmer temperatures. The forest sparkled in all kinds of greens.

Sunny days in Bachotek

For our daughter the trip was a great experience – lots of wild play grounds, many other kids, and a lake she really wanted to go swimming in. Normally I go swimming there, but this year I had a bad cold so I refrained from it, and with me also our daughter, to her great disappointment.

Ready to go swimming

Another day has passed, and the sunset lights up the beautiful lake Bachotek. I cannot imagine a better place for concentrated work paired with great relaxation!

Sunset along the lake

During the days the temperatures were really nice, but the mornings were cold, and our morning walk to the breakfast place was quite chilly.

Enjoying a fresh morning on the lake

Ample coffee breaks and lunch breaks left us enough time to discuss new developments. But the single most important thing that brought people to talk a lot was the horrible internet connection, a big plus in BachoTeX (but as far as rumors go it might have been the last time with that advantage!).

Beautiful and peaceful lake Bachotek

The last evening we had a banquet honoring 25 years of BachoTeX, one of the oldest TeX conferences. Live music and dancing, lots of good food and drinks, and outside the perfect evening atmosphere.

Sunset around the lake

Too fast the time has passed and we had to return to Vienna, retracing the long trip. However far it may be, taking on the burden of coming from Japan is worth every drop of sweat; the time at BachoTeX is probably among the most productive, and at the same time most relaxing, for me.

Big thanks to all our Polish friends for organizing the event.

BachoTeX conference photo

(photo by Frans Goddijn)


11 May, 2017 01:37PM by Norbert Preining

hackergotchi for Daniel Lange

Daniel Lange

Thunderbird startup hang (hint: Add-Ons)

If you see Thunderbird hanging during startup for a minute and then continuing to load fine, you are probably running into an issue similar to what I saw when Debian migrated Icedove back to the "official" Mozilla Thunderbird branding and changed ~/.icedove to ~/.thunderbird in the process (one symlinked to the other).

Looking at the console log (=start Thunderbird from a terminal so you see its messages), I got:

console.log: foxclocks.bootstrap._loadIntoWindow(): got xul-overlay-merged - waiting for overlay-loaded
[.. one minute delay ..]
console.log: foxclocks.bootstrap._windowListener(): got window load chrome://global/content/commonDialog.xul

Stracing confirms it hangs because Thunderbird loops waiting for a FUTEX until that apparently gets kicked by a XUL core timeout.
(Thanks for defensive programming, folks!)

So in my case uninstalling the Add-On Foxclocks easily solved the problem.

I assume other Thunderbird Add-Ons may cause the same issue, hence the more generic description above.
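
A quick way to confirm that an add-on is to blame is Mozilla's standard safe mode, which starts Thunderbird with all extensions disabled:

$ thunderbird -safe-mode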

11 May, 2017 08:00AM by Daniel Lange (nospam@example.com)

May 10, 2017

hackergotchi for Junichi Uekawa

Junichi Uekawa

I tried learning rust but making very little progress.

I tried learning rust but making very little progress.

10 May, 2017 10:58AM by Junichi Uekawa

May 09, 2017

hackergotchi for Clint Adams

Clint Adams

Four years

Posted on 2017-05-09
Tags: barks

09 May, 2017 08:45PM

hackergotchi for Matthew Garrett

Matthew Garrett

Intel AMT on wireless networks

More details about Intel's AMT vulnerability have been released - it's about the worst case scenario, in that it's a total authentication bypass that appears to exist independent of whether the AMT is being used in Small Business or Enterprise modes (more background in my previous post here). One thing I claimed was that even though this was pretty bad it probably wasn't super bad, since Shodan indicated that there were only a few thousand machines on the public internet accessible via AMT. Most deployments were probably behind corporate firewalls, which meant that it was plausibly a vector for spreading within a company but probably wasn't a likely initial vector.

I've since done some more playing and come to the conclusion that it's rather worse than that. AMT actually supports being accessed over wireless networks. Enabling this is a separate option - if you simply provision AMT it won't be accessible over wireless by default, you need to perform additional configuration (although this is as simple as logging into the web UI and turning on the option). Once enabled, there are two cases:
  1. The system is not running an operating system, or the operating system has not taken control of the wireless hardware. In this case AMT will attempt to join any network that it's been explicitly told about. Note that in default configuration, joining a wireless network from the OS is not sufficient for AMT to know about it - there needs to be explicit synchronisation of the network credentials to AMT. Intel provide a wireless manager that does this, but the stock behaviour in Windows (even after you've installed the AMT support drivers) is not to do this.
  2. The system is running an operating system that has taken control of the wireless hardware. In this state, AMT is no longer able to drive the wireless hardware directly and counts on OS support to pass packets on. Under Linux, Intel's wireless drivers do not appear to implement this feature. Under Windows, they do. This does not require any application level support, and uninstalling LMS will not disable this functionality. This also appears to happen at the driver level, which means it bypasses the Windows firewall.
Case 2 is the scary one. If you have a laptop that supports AMT, and if AMT has been provisioned, and if AMT has had wireless support turned on, and if you're running Windows, then connecting your laptop to a public wireless network means that AMT is accessible to anyone else on that network[1]. If it hasn't received a firmware update, they'll be able to do so without needing any valid credentials.

If you're a corporate IT department, and if you have AMT enabled over wifi, turn it off. Now.

[1] Assuming that the network doesn't block client to client traffic, of course


09 May, 2017 08:18PM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Surviving an “Eternal September:” How an Online Community Managed a Surge of Newcomers

Attracting newcomers is among the most widely studied problems in online community research. However, with all the attention paid to the challenge of getting new users, much less research has studied the flip side of that coin: large influxes of newcomers can pose major problems as well!

The most widely known example of problems caused by an influx of newcomers into an online community occurred in Usenet. Every September, new university students connecting to the Internet for the first time would wreak havoc in the Usenet discussion forums. When AOL connected its users to the Usenet in 1994, it disrupted the community for so long that it became widely known as “The September that never ended”.

Our study considered a similar influx in NoSleep—an online community within Reddit where writers share original horror stories and readers comment and vote on them. With strict rules requiring that all members of the community suspend disbelief, NoSleep thrives off the fact that readers experience an immersive storytelling environment. Breaking the rules is as easy as questioning the truth of someone’s story. Socializing newcomers represents a major challenge for NoSleep.

Number of subscribers and moderators on /r/NoSleep over time.

On May 7th, 2014, NoSleep became a “default subreddit”—i.e., every new user to Reddit automatically joined NoSleep. After gradually accumulating roughly 240,000 members from 2010 to 2014, the NoSleep community grew to over 2 million subscribers in a year. That said, NoSleep appeared to largely hold things together. This reflects the major question that motivated our study: How did NoSleep withstand such a massive influx of newcomers without enduring their own Eternal September?

To answer this question, we interviewed a number of NoSleep participants, writers, moderators, and admins. After transcribing, coding, and analyzing the results, we proposed that NoSleep survived because of three inter-connected systems that helped protect the community’s norms and overall immersive environment.

First, there was a strong and organized team of moderators who enforced the rules no matter what. They recruited new moderators knowing the community’s population was going to surge. They utilized a private subreddit for NoSleep’s staff. They were able to socialize and educate new moderators effectively. Although issuing sanctions against community members was often difficult, our interviewees explained that NoSleep’s moderators were deeply committed and largely uncompromising.

That commitment resonates within the second system that protected NoSleep: regulation by normal community members. From our interviews, we found that the participants felt a shared sense of community that motivated them both to socialize newcomers themselves as well as to report inappropriate comments and downvote people who violate the community’s norms.

Finally, we found that the technological systems protected the community as well. For instance, post-throttling was instituted to limit the frequency at which a writer could post their stories. Additionally, Reddit’s “Automoderator”, a programmable AI bot, was used to issue sanctions against obvious norm violators while running in the background. Participants also pointed to the tools available to them—the report feature and voting system in particular—to explain how easy it was for them to report and regulate the community’s disruptors.
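
To illustrate that last point, AutoModerator rules are written as plain YAML. A purely hypothetical rule in the spirit of what participants described (not NoSleep's actual configuration, which the paper does not reproduce) might look like:

type: comment
body (includes): ["fake", "this didn't happen", "not real"]
action: remove
action_reason: "possible immersion-breaking comment"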

This blog post was written with Charlie Kiene. The paper and work this post describes is collaborative work with Charlie Kiene and Andrés Monroy-Hernández. The paper was published in the Proceedings of CHI 2016 and is released as open access so anyone can read the entire paper here. A version of this post was published on the Community Data Science Collective blog.

09 May, 2017 04:33PM by Benjamin Mako Hill

hackergotchi for Olivier Berger

Olivier Berger

Installing a Docker Swarm cluster inside VirtualBox with Docker Machine

I’ve documented the process of installing a Docker Swarm cluster inside VirtualBox with Docker Machine. This allows experimenting with Docker Swarm, the simple docker container orchestrator, over VirtualBox.

This allows you to play with orchestration scenarios without having to install docker on real machines.
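
As a taste of what the documented process builds on, everything starts with Docker Machine's VirtualBox driver (a minimal sketch, not the full walkthrough from the guide; the machine name is arbitrary):

$ docker-machine create --driver virtualbox manager1
$ eval $(docker-machine env manager1)
$ docker swarm init --advertise-addr $(docker-machine ip manager1)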

Also, such an environment may be handy for teaching if you don't want to install docker on the lab's host. Installing the docker engine on Linux hosts for unprivileged users requires some care (refer to docs about securing Docker), as the default configuration may allow learners to easily gain root privileges (which may or may not be desired).

See more at http://www-public.telecom-sudparis.eu/~berger_o/docker/install-docker-machine-virtualbox.html

09 May, 2017 12:02PM by Olivier Berger

Reproducible builds folks

Reproducible Builds: week 106 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday April 30 and Saturday May 6 2017:

Past and upcoming events

Between May 5th-7th the Reproducible Builds Hackathon 2017 took place in Hamburg, Germany.

On May 6th Mattia Rizzolo gave a talk on Reproducible Builds at DUCC-IT 17 in Vicenza, Italy.

On May 13th Chris Lamb will give a talk on Reproducible Builds at OSCAL 2017 in Tirana, Albania.

Media coverage

Toolchain development and fixes

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Reviews of unreproducible packages

93 package reviews have been added, 12 have been updated and 98 have been removed in this week, adding to our knowledge about identified issues.

The following issues have been added:

2 issue types have been updated:

The following issues have been removed:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (3)

diffoscope development

strip-nondeterminism development


This week's edition was written by Chris Lamb, Holger Levsen and Ximin Luo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

09 May, 2017 08:53AM

hackergotchi for Martin Pitt

Martin Pitt

Cockpit is now just an apt install away

Cockpit has now landed in Debian unstable and Ubuntu 17.04 and devel, which means it's now a simple

$ sudo apt install cockpit

away for you to try and use. This metapackage pulls in the most common plugins, which are currently NetworkManager and udisks/storaged. If you want/need, you can also install cockpit-docker (if you grab docker.io from jessie-backports or use Ubuntu) or cockpit-machines to administer VMs through libvirt. Cockpit upstream also has a rather comprehensive Kubernetes/Openstack plugin, but this isn’t currently packaged for Debian/Ubuntu as kubernetes itself is not yet in Debian testing or Ubuntu.
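
For example, adding the two optional plugins mentioned above is just another apt call:

$ sudo apt install cockpit-docker cockpit-machines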

After that, point your browser to https://localhost:9090 (or the host name/IP where you installed it) and off you go.

What is Cockpit?

Think of it as an equivalent of a desktop (like GNOME or KDE) for configuring, maintaining, and interacting with servers. It is a web service that lets you log into your local or a remote (through ssh) machine using normal credentials (PAM user/password or SSH keys) and then starts a normal login session just as gdm, ssh, or the classic VT logins would.

Login screen System page

The left side bar is the equivalent of a “task switcher”, and the “applications” (i. e. modules for administering various aspects of your server) are run in parallel.

The main idea of Cockpit is that it should not behave “special” in any way - it does not have any specific configuration files or state keeping and uses the same Operating System APIs and privileges like you would on the command line (such as lvmconfig, the org.freedesktop.UDisks2 D-Bus interface, reading/writing the native config files, and using sudo when necessary). You can simultaneously change stuff in Cockpit and in a shell, and Cockpit will instantly react to changes in the OS, e. g. if you create a new LVM PV or a network device gets added. This makes it fundamentally different to projects like webmin or ebox, which basically own your computer once you use them the first time.

It is an interface for your operating system, which is even reflected in the branding: as you see above, this is Debian (or Ubuntu, or Fedora, or wherever you run it on), not "Cockpit".

Remote machines

In your home or small office you often have more than one machine to maintain. You can install cockpit-bridge and cockpit-system on those for the most basic functionality, configure SSH on them, and then add them on the Dashboard (I add a Fedora 26 machine here) and from then on can switch between them on the top left, and everything works and feels exactly the same, including using the terminal widget:

Add remote Remote terminal

The Fedora 26 machine has some more Cockpit modules installed, including a lot of “playground” ones, thus you see a lot more menu entries there.
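
On the remote end, the minimal set really is just those two packages (no web server is needed there, the connection goes over plain ssh):

$ sudo apt install cockpit-bridge cockpit-system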

Under the hood

Beneath the fancy Patternfly/React/JavaScript user interface is the Cockpit API and protocol, which particularly fascinates me as a developer as that is what makes Cockpit so generic, reactive, and extensible. This API connects the worlds of the web, which speaks IPs and host names, ports, and JSON, to the “local host only” world of operating systems which speak D-Bus, command line programs, configuration files, and even use fancy techniques like passing file descriptors through Unix sockets. In an ideal world, all Operating System APIs would be remotable by themselves, but they aren’t.

This is where the “cockpit bridge” comes into play. It is a JSON (i. e. ASCII text) stream protocol that can control arbitrarily many “channels” to the target machine for reading, writing, and getting notifications. There are channel types for running programs, making D-Bus calls, reading/writing files, getting notified about file changes, and so on. Of course every channel can also act on a remote machine.

One can play with this protocol directly. E. g. this opens a (local) D-Bus channel named “d1” and gets a property from systemd’s hostnamed:

$ cockpit-bridge --interact=---

{ "command": "open", "channel": "d1", "payload": "dbus-json3", "name": "org.freedesktop.hostname1" }
---
d1
{ "call": [ "/org/freedesktop/hostname1", "org.freedesktop.DBus.Properties", "Get",
          [ "org.freedesktop.hostname1", "StaticHostname" ] ],
  "id": "hostname-prop" }
---

and it will reply with something like

d1
{"reply":[[{"t":"s","v":"donald"}]],"id":"hostname-prop"}
---

(“donald” is my laptop’s name). By adding additional parameters like host and passing credentials these can also be run remotely through logging in via ssh and running cockpit-bridge on the remote host.

Stef Walter explains this in detail in a blog post about Web Access to System APIs. Of course Cockpit plugins (both internal and third-party) don’t directly speak this, but use a nice JavaScript API.

As a simple example of how to create your own Cockpit plugin that uses this API, you can look at my schroot plugin proof of concept which I hacked together at DevConf.cz in about an hour during the Cockpit workshop. Note that I had never written any JavaScript before, and I didn't put any effort into design whatsoever, but it does work ☺.

Next steps

Cockpit aims at servers and getting third-party plugins for talking to your favourite part of the system, which means we really want it to be available in Debian testing and stable, and Ubuntu LTS. Our CI runs integration tests on all of these, so each and every change that goes in is certified to work on Debian 8 (jessie) and Ubuntu 16.04 LTS, for example. But I'd like to replace the external PPA/repository in the Install instructions with just "it's readily available in -backports"!

Unfortunately there’s some procedural blockers there, the Ubuntu backport request suffers from understaffing, and the Debian stable backport is blocked on getting it in to testing first, which in turn is blocked by the freeze. I will soon ask for a freeze exception into testing, after all it’s just about zero risk - it’s a new leaf package in testing.

Have fun playing around with it, and please report bugs!

Feel free to discuss and ask questions on the Google+ post.

09 May, 2017 08:51AM

May 08, 2017

Bits from Debian

Bursary applications for DebConf17 are closing in 48 hours!

This is a final reminder: if you intend to apply for a DebConf17 bursary and have not yet done so, please proceed as soon as possible.

Bursary applications for DebConf17 will be accepted until May 10th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk counts towards your bursary; if you have a submission to make, submit it even if it is only sketched out. You will be able to detail it later.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

Note: For DebCamp we only have on-site accommodation available. The option chosen in the registration system will only be for the DebConf period (August 5 to 12).

See you in Montréal!

DebConf17 logo

08 May, 2017 08:30PM by Nicolas Dandrimont for the DebConf Team

hackergotchi for Daniel Pocock

Daniel Pocock

Visiting Kamailio World (Sold Out) and OSCAL'17

This week I'm visiting Kamailio World (8-10 May, Berlin) and OSCAL'17 (13-14 May, Tirana).

Kamailio World

Kamailio World features a range of talks about developing and using SIP and telephony applications and offers many opportunities for SIP developers, WebRTC developers, network operators and users to interact. Wednesday, at midday, there is a Dangerous Demos session where cutting edge innovations will make their first (and potentially last) appearance.

Daniel Pocock and Daniel-Constantin Mierla at Kamailio World, Berlin, 2017

OSCAL'17, Tirana

OSCAL'17 is an event that has grown dramatically in recent years and is expecting hundreds of free software users and developers, including many international guests, to converge on Tirana, Albania this weekend.

On Saturday I'll be giving a workshop about the Debian Hams project and Software Defined Radio. On Sunday I'll give a talk about Free Real-time Communications (RTC) and the alternatives to systems like Skype, Whatsapp, Viber and Facebook.

08 May, 2017 02:50PM by Daniel.Pocock

hackergotchi for Lars Wirzenius

Lars Wirzenius

Ick2 design discussion

Recently, Daniel visited us in Helsinki. In addition to enjoying local food and scenery, we spent some time together in front of a whiteboard to sketch out designs for Ick2. Ick is my continuous integration system, and it's all Daniel's fault for suggesting the name. Ahem.

I am currently using the first generation of Ick and it is a rigid, cumbersome, and fragile thing. It works well enough that I don't miss Jenkins, but I would like something better. That's the second generation of Ick, or Ick2, and that's what we discussed with Daniel.

Where pretty much everything in Ick1 is hardcoded, everything in Ick2 will be user-configurable. It's my last, best chance to go completely overboard in the second system syndrome manner.

Where Ick1 was written in a feverish two-week hacking session, rushed because my Jenkins install at the time had broken one time too many, we're taking our time with Ick2. Slow and careful is the tune this time around.

Our "minimum viable product" or MVP for Ick2 is defined like this:

Ick2 builds static websites from source in a git repository, using ikiwiki, and publishes them to a web server using rsync. A change to the git repository triggers a new build. It can handle many separate websites, and if given enough worker machines, can build many of them concurrently.

This is a real task, and something we already do with Ick1 at work. It's a reasonable first step for the new program.

Some decisions we made:

  • The Ick2 controller, which decides which projects to build, and what's the next build step at any one time, will be reactive only. It will do nothing except in response to an HTTP API request. This includes things like timed events. An external service will need to poke the controller at the right time.

  • The controller will be accompanied by worker manager processes, which fetch instructions about what to do next and control the actual workers over ssh.

  • Provisioning of the workers is out of scope for the MVP; a static list of workers is OK. In the future we might make worker registration dynamic, but not for the MVP. (Parts or all of this decision may change later, but we need to start somewhere.)

  • The MVP publishing will happen by running rsync to a web server. Providing credentials for the workers to do that is the sysadmin's problem, not something the MVP will handle itself.

  • The MVP needs to handle more than one worker and more than one pipeline, and needs to build things concurrently when there's call for it.

  • The MVP will need to read the pipelines (and their steps and any other info) from YAML config files, and can't have that stuff hardcoded.

  • The MVP API will have no authentication or authorization stuff yet.

The initial pipelines will be basically like this, but expressed in some way by the user:

  1. Clone the source repository.
  2. Run ikiwiki --build to build the website.
  3. Run rsync to publish the website on a server.
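
To make that concrete, here is a rough sketch of what such a pipeline might look like in one of the YAML config files, loaded here with Python and PyYAML. The schema (projects, project, pipeline, shell) and the hostnames are invented for illustration; none of this is a settled Ick2 format.

    # Hypothetical Ick2 pipeline definition; the schema and the
    # hostnames are invented for illustration, not a settled format.
    import yaml  # PyYAML

    CONFIG = """
    projects:
      - project: foo-site
        pipeline:
          - shell: git clone git://git.example.com/foo-site.git src
          - shell: cd src && ikiwiki --build
          - shell: rsync -av src/html/ www@web.example.com:/srv/foo-site/
    """

    config = yaml.safe_load(CONFIG)
    for project in config["projects"]:
        print(project["project"], len(project["pipeline"]), "steps")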

Assumptions:

  • Every worker can clone from the git server.
  • Every worker has all the build tools.
  • Every worker has rsync and access to every web server.
  • Every pipeline run is clean.

Actions the Ick2 controller API needs to support:

  • List all existing projects.
  • Trigger a project to build.
  • Query what project builds are running.
  • Get build logs for a project: current log (from the running build), and the most recent finished build.

A sketch API:

  • POST /projects/foo/+trigger

    Trigger a build of project foo. If the git repository hasn't changed, the build runs anyway.

  • GET /projects

    List names of all projects.

  • GET /projects/foo

    On second thought, I can't think of anything useful for this to return for the MVP. Scratch.

  • GET /projects/foo/logs/current

    Return entire known build log captured so far for the currently running build.

  • GET /projects/foo/logs/previous

    Return entire build log for latest finished build.

  • GET /work/bar

    Used by worker bar: return next not-yet-finished step to run as a JSON object containing fields "project" (name of project for which to run the step) and "shell" (a shell command to run). The call will return the same JSON object until the worker reports it as having finished.

  • POST /work/bar/snippet

    Used by worker bar to report progress on the currently running step: a JSON object containing fields "stdout" (string with output from the shell command's stdout), "stderr" (ditto but stderr), and "exit_code" (the shell command's exit code, if it's finished, or null).
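
As a rough illustration of the other side of this protocol, a minimal worker manager loop might look like the following Python sketch. The controller address is invented, and a real manager would stream snippets while the command runs instead of reporting once at the end.

    # Minimal sketch of a worker manager polling the API sketched
    # above; "requests" is the third-party HTTP library.
    import subprocess
    import time

    import requests

    CONTROLLER = "http://localhost:8080"  # hypothetical controller address
    WORKER = "bar"

    def work_once():
        # Ask the controller for the next not-yet-finished step.
        step = requests.get(f"{CONTROLLER}/work/{WORKER}").json()
        if not step:
            return  # nothing to do right now
        proc = subprocess.run(
            step["shell"], shell=True, capture_output=True, text=True)
        # A real manager would POST a snippet every second or so while
        # the command runs, and kill it if the controller answers KILL;
        # this sketch reports a single snippet once the command is done.
        snippet = {
            "stdout": proc.stdout,
            "stderr": proc.stderr,
            "exit_code": proc.returncode,
        }
        requests.post(f"{CONTROLLER}/work/{WORKER}/snippet", json=snippet)

    while True:
        work_once()
        time.sleep(1)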

Sequence:

  • Git server has a hook that calls "POST /projects/foo/+trigger" (or else this is simulated by the user).

  • Controller adds a build of project foo to the queue.

  • Worker manager calls "GET /work/bar", gets a shell command to run, and starts running it on its worker.

  • While the worker runs the shell command, the worker manager calls "POST /work/bar/snippet" every second or so to report progress, including any collected output.

  • Controller responds with OK or KILL, and if the latter, worker kills the command it is running. Worker manager continues reporting progress via snippet until shell command is finished (on its own or by having been killed).

  • Controller appends any output reported via .../snippet to the current build log. When it learns a shell command has finished, it updates its idea of the next step to run.

  • When controller learns a project has finished building, it rotates the current build log to be the previous one.
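
The controller's side of that sequence boils down to a small amount of bookkeeping. The following in-memory sketch is one guess at its shape; the names are invented, and concurrency, KILL responses, and persistence are all left out.

    # In-memory sketch of the controller bookkeeping described above;
    # handles one build at a time, all names invented.
    from collections import deque

    class Controller:
        def __init__(self, pipelines):
            self.pipelines = pipelines  # project name -> list of shell commands
            self.queue = deque()        # queued projects; head is being built
            self.step = {}              # project -> index of current step
            self.current_log = {}       # project -> log of the running build
            self.previous_log = {}      # project -> log of last finished build

        def trigger(self, project):
            # POST /projects/foo/+trigger: queue a build, changed or not.
            self.queue.append(project)
            self.step[project] = 0
            self.current_log[project] = ""

        def next_step(self):
            # GET /work/bar: hand out the same step until it is finished.
            if not self.queue:
                return None
            project = self.queue[0]
            shell = self.pipelines[project][self.step[project]]
            return {"project": project, "shell": shell}

        def snippet(self, project, stdout, stderr, exit_code):
            # POST /work/bar/snippet: append output; advance when done.
            self.current_log[project] += stdout + stderr
            if exit_code is not None:
                self.step[project] += 1
                if self.step[project] == len(self.pipelines[project]):
                    # Build finished: rotate the current log to previous.
                    self.previous_log[project] = self.current_log.pop(project)
                    self.queue.popleft()
            return "OK"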

The next step will probably be to sketch a yarn test suite of the API and implement a rudimentary one.

08 May, 2017 10:04AM

Mike Gabriel

[Arctica Project] Release of nx-libs (version 3.5.99.7)

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Friday, May 5th 2017, version 3.5.99.7 of nx-libs was released [1].

Credits

A special thanks goes to Ulrich Sibiller for tracking down a regression that tremendously slowed down keyboard input on high-latency connections. Thanks for that!

Another thanks goes to the Debian project for indirectly providing us with so many build platforms. We are nearly at the point where nx-libs builds on all architectures supported by the Debian project. (Runtime stability is a completely different issue; we will get to that soon.)

Changes between 3.5.99.6 and 3.5.99.7

  • Include Debian patches, re-introducing GNU/Hurd and GNU/kFreeBSD support. Thanks to various porters on #debian-ports and #debian-hurd for feedback (esp. Paul Wise, James Clark, and Samuel Thibault).
  • Revert "Switch from using libNX_X11's deprecated XKeycodeToKeysym() function to using XGetKeyboardMapping()."
  • Mark the XKeycodeToKeysym() function as not deprecated in libNX_X11, since using XGetKeyboardMapping() (as suggested by X.org devs) in nxagent is not an option for us: XGetKeyboardMapping() simply creates far too many round trips for our taste.

Change Log

The complete list of changes (since 3.5.99.6) can be obtained from here.

Known Issues

A list of known issues can be obtained from the nx-libs issue tracker [issues].

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component). The nxagent Xserver can be used from remote sessions (via the nxcomp compression library) or as a nested Xserver.

Ubuntu developers, please note: we have added nightly builds for the latest Ubuntu releases to our build server. At the moment, you can obtain nx-libs builds for Ubuntu 16.10 (yakkety) and 17.04 (zesty) as nightly builds.

References

08 May, 2017 08:40AM by sunweaver