September 29, 2017

Iain R. Learmonth

Tor Metrics Team Meeting in Berlin

We had a meeting of the Metrics Team in Berlin yesterday to organise a roadmap for the next 12 months. This roadmap isn’t yet finalised, as it will now be taken to the main Tor developers meeting in Montreal, where we may find that some things we thought were needed aren’t, or that there are things we had forgotten. Still, we have a pretty good draft and we were all quite happy with it.

We have updated tickets in the Metrics component on the Tor Trac to include either “metrics-2017” or “metrics-2018” in the keywords field to identify tickets that we expect to be able to resolve either by the end of this year or by the end of next year (again, not yet finalised, but this should give a good idea). In some cases this may mean closing the ticket without fixing it, but only if we believe that either the ticket is out of scope for the Metrics Team or that it’s an old ticket and no one else has had the same issue since.

Having an in-person meeting has allowed us to have easy discussion around some of the more complex tickets that have been sitting around. In many cases these are tickets where we need input from other teams, or perhaps even just reassigning the ticket to another team, but without a clear plan we couldn’t do this.

My work for the remainder of the year will be primarily on Atlas where we have a clear plan for integrating with the Tor Metrics website, and may include some other small things relating to the website.

I will also be triaging the current Compass tickets as we look to shut down Compass and integrate its functionality into Atlas. Compass-specific tickets will be closed, but some tickets relating to desirable functionality may be moved to Atlas with the fix implemented there instead.

29 September, 2017 02:00PM

Sven Hoexter

Last rites to the lyx and elyxer packaging

After having been a heavy LyX user from 2005 to 2010, I've continued to maintain LyX more or less until now. Finally I'm starting to leave that stage and have removed myself from the Uploaders list. The upload with some other last packaging changes is currently sitting in the git repo, mainly because lintian on ftp-master currently rejects 'packagename@packages.d.o' maintainer addresses (the alternative to the lists.alioth.d.o maintainer mailing lists). For elyxer I filed a request for removal. It hasn't seen any upstream activity for a while, and LyX's built-in HTML export support has improved.

My hope is that if I step away far enough someone else might actually pick it up. I had a strange moment when I recently realized that xchat got reintroduced to Debian after mapreri and I spent some time last year getting it removed before the stretch release.

29 September, 2017 10:39AM

Petter Reinholdtsen

Visualizing GSM radio chatter using gr-gsm and Hopglass

Every mobile phone announces its existence over radio to the nearby mobile cell towers, and this radio chatter is available to anyone with a radio receiver capable of receiving it. Details about the mobile phones are of course collected with very good accuracy by the phone companies, but that is not the topic of this blog post. The mobile phone radio chatter makes it possible to figure out when a cell phone is nearby, as it includes the SIM card ID (IMSI). By paying attention over time, one can see when a phone arrives in and when it leaves an area. I believe it would be nice to make this information more available to the general public, to make more people aware of how their phones are announcing their whereabouts to anyone who cares to listen.

I am very happy to report that we managed to get something visualizing this information up and running for Oslo Skaperfestival 2017 (Oslo Makers Festival), taking place today and tomorrow at the Deichmanske library. The solution is based on the simple recipe for listening to GSM chatter I posted a few days ago, and will show up at the stand of Åpen Sone from the Computer Science department of the University of Oslo. The presentation will show the nearby mobile phones (aka IMSIs) as dots in a web browser graph, with a line from each dot to the dot representing the mobile base station it is talking to. It was working in the lab yesterday, and was moved into place this morning.

We set up a fairly powerful desktop machine running Debian Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers connected, and visualize the visible cell phone towers using an English version of Hopglass. A fairly powerful machine is needed because the grgsm_livemon_headless processes from gr-gsm, which convert the radio signal to data packets, are quite CPU intensive.

The frequencies to listen to are identified using a slightly patched scan-and-livemon (patched to set the --args values for each receiver), and the Hopglass data is generated using the patches in my meshviewer-output branch. For some reason we could not get more than four SDRs working. There is also a geographical map trying to show the location of the base stations, but their coordinates seem to be hardcoded to some random location in Germany. The code should be replaced with code to look up locations in a text file, a SQLite database or one of the online databases mentioned in the GitHub issue on the topic.
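
For illustration, starting the monitors by hand would look something like this (a sketch only: the option spelling and the example frequencies are assumptions on my side and may differ between gr-gsm versions and locations, which is why scan-and-livemon picks the frequencies automatically):

# one grgsm_livemon_headless process per RTL-SDR dongle, each tuned to a
# different downlink frequency (values below are made up for illustration)
grgsm_livemon_headless --args rtl=0 -f 936.2M &
grgsm_livemon_headless --args rtl=1 -f 937.8M &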

If this sounds interesting, visit the stand at the festival!

29 September, 2017 08:30AM

Dirk Eddelbuettel

Rcpp 0.12.13: Updated vignettes, and more

The thirteenth release in the 0.12.* series of Rcpp landed on CRAN this morning, following a little delay because Uwe Ligges was traveling and whatnot. We had announced its availability to the mailing list late last week. As usual, a rather substantial amount of testing effort went into this release so you should not expect any surprises.

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, and the 0.12.12 release in July 2017, making it the seventeenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1069 packages (and hence 73 more since the last release) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a large-ish update to the documentation as all vignettes (apart from the unit test one, which is a one-off) now use Markdown and the (still pretty new) pinp package by James and myself. There is also a new vignette corresponding to the PeerJ preprint James and I produced as an updated and current Introduction to Rcpp, replacing the older JSS piece (which is still included as a vignette too).

A few other things got fixed: Dan is working on the const iterators you would expect with modern C++, Lei Yu spotted an error in Modules, and more. See below for details.

Changes in Rcpp version 0.12.13 (2017-09-24)

  • Changes in Rcpp API:

    • New const iterator functions cbegin() and cend() have been added to several vector and matrix classes (Dan Dillon and James Balamuta in #748, starting to address #741).
  • Changes in Rcpp Modules:

    • Misplacement of one parenthesis in macro LOAD_RCPP_MODULE was corrected (Lei Yu in #737)
  • Changes in Rcpp Documentation:

    • Rewrote the macOS sections to depend on official documentation due to large changes in the macOS toolchain. (James Balamuta in #742 addressing issue #682).

    • Added a new vignette ‘Rcpp-introduction’ based on new PeerJ preprint, renamed existing introduction to ‘Rcpp-jss-2011’.

    • Transitioned all vignettes to the 'pinp' RMarkdown template (James Balamuta and Dirk Eddelbuettel in #755 addressing issue #604).

    • Added an entry on running 'compileAttributes()' twice to the Rcpp-FAQ (#745).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 September, 2017 01:31AM

September 28, 2017

Enrico Zini

Systemd path units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.path units

This kind of unit can be used to monitor a file or directory for changes using inotify, and activate other units when an event happens.

For example, this path unit watches a spool directory (creating it if needed) and activates another unit whenever a .pdf file is added to /tmp/spool/:

[Unit]
Description=Monitor /tmp/spool/ for new .pdf files

[Path]
Unit=beeponce.service
PathExistsGlob=/tmp/spool/*.pdf
MakeDirectory=true

This instead activates another unit whenever /tmp/ready is changed, for example by someone running touch /tmp/ready:

[Unit]
Description=Monitor /tmp/ready

[Path]
Unit=beeponce.service
PathChanged=/tmp/ready

And beeponce.service:

[Unit]
Description=Beeps once

[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav
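
Assuming the first path unit above is saved as, say, /etc/systemd/system/spool-watch.path (the file name here is my own choice, not from the course notes), it is activated like any other unit:

systemctl daemon-reload
systemctl start spool-watch.path

From then on, dropping a .pdf file into /tmp/spool/ starts beeponce.service.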

See man systemd.path

28 September, 2017 10:00PM

Sean Whitton

Debian Policy 4.1.1.0 released

I just released Debian Policy version 4.1.1.0.

There are only two normative changes, and neither is very important. The main thing is that this upload fixes a lot of packaging bugs that were found since we converted to build with Sphinx.

There are still some issues remaining; I hope to submit some patches to the www-team’s scripts to fix those.

28 September, 2017 08:35PM

Ricardo Mones

Long time no post

It seems the breakage of my desktop computer more than 3 months ago also caused a hiatus in my online publishing activities... it was not really intended, it's just that I was busy with other things ಠ_ಠ.

With a broken desktop, being able to build software on the laptop became a priority. Around September 2016 or so the good'n'old black MacBook decided to stop working. I didn't really need a replacement at that time, but I never liked having just a single working system, and in October I found an offer which I could not resist and bought a ThinkPad X260. It helped to build my final project (it was faster than the desktop), but lacking time for FOSS I hadn't used it for much more.

Setting up the laptop for software work (Debian packages and Claws Mail, mainly) was fairly easy. Finding a replacement for the broken desktop was a bit more difficult. I considered a lot of configurations and prices, from the new Ryzens to just buying the same components again (pretty difficult now because they're discontinued). In the end, I decided to spend the minimum and make good use of everything still working (memory, discs and wireless card), so I finally got an AMD A10-7860K on top of an Asus A88M-PLUS. This board has more SATA ports, so I added an unused SSD, the remains of a broken laptop, to install the new system —Debian Stretch, of course ʘ‿ʘ— while keeping the existing software RAID partitions on the spinning drives.


The last thing distracting me from the usual routine was replacing the car. Our child is growing as expected and the Fiesta was starting to feel small and uncomfortable, especially for long distance travel. We went for a hybrid model with a high capacity boot. Given our budget, we only found 3 models below the limit: Kia Niro, Hyundai Ioniq and Toyota Auris TS. The color was decided by the kid (after forbidding black), and this was the winner...

In the middle of all of this we also took some vacation to travel to the south of Galicia, mostly around Vigo area, but also visiting Oporto and other nice places.

28 September, 2017 07:31PM

Matthias Klumpp

Adding fonts to software centers

Last year, the AppStream specification gained proper support for adding metadata for fonts, after Richard Hughes had done some work on it years ago. We weren’t happy with how fonts were handled at that time, so we searched for better solutions, which is why this took a bit longer to be done. Last year, I implemented the final support for fonts in both appstream-generator (the metadata extractor used by Debian and a few others) and the AppStream specification. This blog post was sitting on my todo list as a draft for a long time, and I only now managed to finish it, so sorry for announcing this so late. Fonts have been available via AppStream for a year now, and this post just sums up the status quo and some neat tricks if you want to write metainfo files for fonts. If you are following AppStream (or the Debian fonts list), you know everything already 🙂 .

Both Richard and I first tried to extract all the metadata needed to display fonts in a proper way to the users from the font files directly. This turned out to be very difficult, since font metadata is often wrong or incomplete, and certain desirable bits of metadata (like a longer description) are missing entirely. After messing around with different ways to solve this for days (after all, by extracting the data from font files directly we would have hundreds of fonts immediately available in software centers), I came to the same conclusion as Richard: the best and easiest solution here is to mandate the availability of metainfo files per font.

Which brings me to the second issue: What is a font? For any person knowing about fonts, one font means one font face, e.g. “Lato Regular Italic” or “Lato Bold”. A user, however, will see the font family as a font, e.g. just “Lato” instead of all the font faces separated out. Since AppStream data is used primarily by software centers, we want something that is easy for users to understand. Hence, an AppStream “font” component really describes a font family or collection of fonts, instead of individual font faces. We do also want AppStream data to be useful for system components looking for a specific font, which is why font components will advertise the individual font face names they contain via a <provides/> tag. Naming fonts and making them identifiable is a whole other issue; I used a document from Adobe on font naming issues as a rough guideline while working on this.

How to write a good metainfo file for a font is best shown with an example. Lato is a nice-looking font family that we want displayed in a software center. So, we write a metainfo file for it and place it in /usr/share/metainfo/com.latofonts.Lato.metainfo.xml for the AppStream metadata generator to pick up:

<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <id>com.latofonts.Lato</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>OFL-1.1</project_license>

  <name>Lato</name>
  <summary>A sanserif type­face fam­ily</summary>
  <description>
    <p>
      Lato is a sanserif type­face fam­ily designed in the Sum­mer 2010 by Warsaw-based designer
      Łukasz Dziedzic (“Lato” means “Sum­mer” in Pol­ish). In Decem­ber 2010 the Lato fam­ily
      was pub­lished under the open-source Open Font License by his foundry tyPoland, with
      sup­port from Google.
    </p>
  </description>

  <url type="homepage">http://www.latofonts.com/</url>

  <provides>
    <font>Lato Regular</font>
    <font>Lato Black Italic</font>
    <font>Lato Black</font>
    <font>Lato Bold Italic</font>
    <font>Lato Bold</font>
    <font>Lato Hairline Italic</font>
    ...
  </provides>
</component>

When the file is processed, we know that we need to look for fonts in the package it is contained in. So, the appstream-generator will load all the fonts in the package and render example texts for them as an image, so we can show users a preview of the font. It will also use heuristics to render an “icon” for the respective font component using its regular typeface. Of course that is not ideal – what if there are multiple font faces in a package? What if the heuristics fail to detect the right font face to display?

This behavior can be influenced by adding <font/> tags to a <provides/> tag in the metainfo file. The font-provides tags should contain the fullnames of the font faces you want to associate with this font component. If the font file does not define a fullname, the family and style are used instead. That way, someone writing the metainfo file can control which fonts belong to the described component. The metadata generator will also pick the first mentioned font name in the <provides/> list as the one to render the example icon for. It will also sort the example text images in the same order as the fonts are listed in the provides-tag.

The example lines of text are written in a language matching the font using Pango.

But what about symbolic fonts? Or fonts where any heuristic fails? At the moment, we see ugly tofu characters or boxes instead of an actual, useful representation of the font. This brings me to an unofficial extension to font metainfo files that, as far as I know, only appstream-generator supports at the moment. I am not happy enough with this solution to add it to the real specification, but it serves as a good method to fix up the edge cases where we can not render good example images for fonts. AppStream-Generator supports the FontIconText and FontSampleText custom AppStream properties to allow metainfo file authors to override the default texts and autodetected values. FontIconText overrides the characters used to render the icon, while FontSampleText can be a line of text used to render the example images. This is especially useful for symbolic fonts, where the heuristics usually fail and we do not know which glyphs would be representative for a font.

For example, a font with mathematical symbols might want to add the following to its metainfo file:

<custom>
  <value key="FontIconText">∑√</value>
  <value key="FontSampleText">∑ ∮ √ ‖...‖ ⊕ 𝔼 ℕ ⋉</value>
</custom>

Any Unicode glyphs are allowed, but asgen will put some length restrictions on the texts.

So, in summary:

  • Fonts are hard
  • I need to blog faster
  • Please add metainfo files to your fonts and submit them upstream if you can!
  • Fonts must have a metainfo file in order to show up in GNOME Software, KDE Discover, AppCenter, etc.
  • The “new” font specification is backward compatible with Richard’s pioneering work from 2014
  • The appstream-generator supports a few non-standard values to influence how font images are rendered that you might be interested in (maybe we can do something like that for appstream-builder as well)
  • The appstream-generator does not (yet?) support the <extends/> logic Richard outlined in his blog post, mainly because it wasn’t necessary in Debian/Ubuntu/Arch yet (which is asgen’s primary audience), and upstream projects would rarely want to write multiple metainfo files.
  • The metaInfo files are not supposed to replace the existing fontconfig files, and we can not generate them from existing metadata, sadly
  • If you want a more detailed look at writing font metainfo files, take a look at the AppStream specification.
  • Please write more font metadata 😉


28 September, 2017 02:24PM by Matthias

Russell Coker

Process Monitoring

Since forking the Mon project to etbemon [1] I’ve been spending a lot of time working on the monitor scripts. Actually monitoring something is usually quite easy, deciding what to monitor tends to be the hard part. The process monitoring script ps.monitor is the one I’m about to redesign.

Here are some of my ideas for monitoring processes. Please comment if you have any suggestions for how to do things better.

For people who don’t use mon, the monitor scripts return 0 if everything is OK and 1 if there’s a problem, along with using stdout to display an error message. While I’m not aware of anyone hooking mon scripts into a different monitoring system, that would be easy to do. One thing I plan to work on in the future is interoperability between mon and other systems such as Nagios.

Basic Monitoring

ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2

I’m currently planning some sort of rewrite of the process monitoring script. The current functionality is to have a list of process names on the command line with minimum and maximum numbers for the instances of the process in question. The above is a sample of the configuration of the monitor. There are some limitations to this; the “master” process in this instance refers to the main process of Postfix, but other daemons use the same process name (it’s one of those names that’s wrong because it’s so obvious). One obvious solution to this is to give the option of specifying the full path so that /usr/lib/postfix/sbin/master can be differentiated from all the other programs named master.
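
To make that interface concrete, a rough sketch of such a check could look like the following (this is not the real etbemon ps.monitor, just an illustration of the behaviour described above, using pgrep to count processes):

#!/usr/bin/python3
# Sketch of the interface described above: each argument is name:min-max,
# exit 0 if every process count is within range, otherwise print the
# problems on stdout and exit 1 (the mon convention).
import subprocess
import sys

def count(name):
    # count processes with exactly this command name
    out = subprocess.run(["pgrep", "-x", "-c", name],
                         stdout=subprocess.PIPE, universal_newlines=True)
    return int(out.stdout.strip() or 0)

problems = []
for spec in sys.argv[1:]:
    name, _, counts = spec.partition(":")
    low, _, high = counts.partition("-")
    n = count(name)
    if n < int(low) or (high and n > int(high)):
        problems.append("%s: %d running, wanted %s" % (name, n, counts))

if problems:
    print("; ".join(problems))
    sys.exit(1)
sys.exit(0)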

The next issue is processes that may run on behalf of multiple users. With sshd there is a single process to accept new connections running as root and a process running under the UID of each logged in user. So the number of sshd processes running as root will be one greater than the number of root login sessions. This means that if a sysadmin logs in directly as root via ssh (which is controversial and not the topic of this post – merely something that people do which I have to support) and the master process then crashes (or the sysadmin stops it either accidentally or deliberately) there won’t be an alert about the missing process. Of course the correct thing to do is to have a monitor talk to port 22 and look for the string “SSH-2.0-OpenSSH_”. Sometimes there are multiple instances of a daemon running under different UIDs that need to be monitored separately. So obviously we need the ability to monitor processes by UID.
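
A minimal sketch of that kind of banner check (the host name and timeout are placeholders):

#!/usr/bin/python3
# Sketch: connect to port 22 and check that the banner looks like OpenSSH.
import socket
import sys

try:
    with socket.create_connection(("localhost", 22), timeout=5) as s:
        banner = s.recv(64).decode("ascii", "replace")
except OSError as e:
    print("ssh check failed: %s" % e)
    sys.exit(1)

if not banner.startswith("SSH-2.0-OpenSSH_"):
    print("unexpected ssh banner: %r" % banner)
    sys.exit(1)
sys.exit(0)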

In many cases process monitoring can be replaced by monitoring of service ports. So if something is listening on port 25 then it probably means that the Postfix “master” process is running regardless of what other “master” processes there are. But for my use I find it handy to have multiple monitors, if I get a Jabber message about being unable to send mail to a server immediately followed by a Jabber message from that server saying that “master” isn’t running I don’t need to fully wake up to know where the problem is.

SE Linux

One feature that I want is monitoring SE Linux contexts of processes in the same way as monitoring UIDs. While I’m not interested in writing tests for other security systems I would be happy to include code that other people write. So whatever I do I want to make it flexible enough to work with multiple security systems.

Transient Processes

Most daemons have a second process of the same name running during the startup process. This means if you monitor for exactly 1 instance of a process you may get an alert about 2 processes running when “logrotate” or something similar restarts the daemon. Also you may get an alert about 0 instances if the check happens to run at exactly the wrong time during the restart. My current way of dealing with this on my servers is to not alert until the second failure event with the “alertafter 2” directive. The “failure_interval” directive allows specifying the time between checks when the monitor is in a failed state, setting that to a low value means that waiting for a second failure result doesn’t delay the notification much.

To deal with this I’ve been thinking of making the ps.monitor script automatically check again after a specified delay. I think that solving the problem with a single parameter to the monitor script is better than using 2 configuration directives to mon to work around it.

CPU Use

Mon currently has a loadavg.monitor script to check the load average. But that won’t catch the case of a single process using too much CPU time but not enough to raise the system load average. Also it won’t catch the case of a CPU hungry process going quiet (EG when the SETI at Home server goes down) while another process goes into an infinite loop. One way of addressing this would be to have the ps.monitor script have yet another configuration option to monitor CPU use, but this might get confusing. Another option would be to have a separate script that alerts on any process that uses more than a specified percentage of CPU time over its lifetime or over the last few seconds, unless it’s in a whitelist of processes and users who are exempt from such checks. Probably every regular user would be exempt from such checks because you never know when they will run a file compression program. Also there is a short list of daemons that are excluded (like BOINC) and system processes (like gzip which is run from several cron jobs).

Monitoring for Exclusion

A common programming mistake is to call setuid() before setgid() which means that the program doesn’t have permission to call setgid(). If return codes aren’t checked (and people who make such rookie mistakes tend not to check return codes) then the process keeps elevated permissions. Checking for processes running as GID 0 but not UID 0 would be handy. As an aside a quick examination of a Debian/Testing workstation didn’t show any obvious way that a process with GID 0 could gain elevated privileges, but that could change with one chmod 770 command.
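
A quick ad-hoc version of that check could be as simple as the following (a sketch using standard ps and awk, not an etbemon monitor):

# list processes whose effective group is root but whose effective user is not
ps -eo pid,user,group,comm | awk '$3 == "root" && $2 != "root"'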

On a SE Linux system there should be only one process running with the domain init_t. Currently that doesn’t happen in Stretch systems running daemons such as mysqld and tor due to policy not matching the recent functionality of systemd as requested by daemon service files. Such issues will keep occurring so we need automated tests for them.

Automated tests for configuration errors that might impact system security is a bigger issue, I’ll probably write a separate blog post about it.

28 September, 2017 01:46PM by etbe

Lior Kaplan

LibreOffice community celebrates 7th anniversary

The Document Foundation blog has a post about LibreOffice's 7th anniversary:

Berlin, September 28, 2017 – Today, the LibreOffice community celebrates the 7th anniversary of the leading free office suite, adopted by millions of users in every continent. Since 2010, there have been 14 major releases and dozens of minor ones, fulfilling the personal productivity needs of both individuals and enterprises, on Linux, macOS and Windows.

I wanted to take a moment to remind people that 7 years ago the community decided to make the de facto fork of OpenOffice.org official after life under Sun (and then Oracle) had become problematic. From the very first hours the project showed its effectiveness. See my post about LibreOffice's first steps. Not to mention what it has achieved in the past 7 years.

This is still one of my favourite open source contributions, not because it was sophisticated or hard, but because it was about using the freedom part of free software:
Replace hardcoded “product by Oracle” with “product by %OOOVENDOR”.

On a personal note, for me, after years of trying to help with OOo l10n for Hebrew and RTL support, things started to go forward at a reasonable pace: getting patches in after years of trying, having upstream fix some of the issues, and actually being able to do the translation. We made it to 100% with LibreOffice 3.5.0 in February 2012 (something we should redo soon…).


Filed under: i18n & l10n, Israeli Community, LibreOffice

28 September, 2017 12:52PM by Kaplan

Russ Allbery

Review: The Seventh Bride

Review: The Seventh Bride, by T. Kingfisher

Publisher: 47North
Copyright: 2015
ISBN: 1-5039-4975-3
Format: Kindle
Pages: 225

There are two editions of this book, although only one currently for sale. This review is of the second edition, released in November of 2015. T. Kingfisher is a pen name for Ursula Vernon when she's writing for adults.

Rhea is a miller's daughter. She's fifteen, obedient, wary of swans, respectful to her parents, and engaged to Lord Crevan. The last was a recent and entirely unexpected development. It's not that she didn't expect to get married eventually, since of course that's what one does. And it's not that Lord Crevan was a stranger, since that's often how it went with marriage for people like her. But she wasn't expecting to get married now, and it was not at all clear why Lord Crevan would want to marry her in particular.

Also, something felt not right about the entire thing. And it didn't start feeling any better when she finally met Lord Crevan for the first time, some days after the proposal to her parents. The decidedly non-romantic hand kissing didn't help, nor did the smug smile. But it's not like she had any choice. The miller's daughter doesn't say no to a lord and a friend of the viscount. The miller's family certainly doesn't say no when they're having trouble paying the bills, the viscount owns the mill, and they could be turned out of their livelihood at a whim.

They still can't say no when Lord Crevan orders Rhea to come to his house in the middle of the night down a road that quite certainly doesn't exist during the day, even though that's very much not the sort of thing that is normally done. Particularly before the marriage. Friends of the viscount who are also sorcerers can get away with quite a lot. But Lord Crevan will discover that there's still a limit to how far he can order Rhea around, and practical-minded miller's daughters can make a lot of unexpected friends even in dire circumstances.

The Seventh Bride is another entry in T. Kingfisher's series of retold fairy tales, although the fairy tale in question is less clear than with The Raven and the Reindeer. Kirkus says it's a retelling of Bluebeard, but I still don't quite see that in the story. I think one could argue equally easily that it's an original story. Nonetheless, it is a fairy tale: it has that fairy tale mix of magical danger and practical morality, and it's about courage and friendships and their consequences.

It also has a hedgehog.

This is a T. Kingfisher story, so it's packed full of bits of marvelous phrasing that I want to read over and over again. It has wonderful characters, the hedgehog among them, and it has, at its heart, a sort of foundational decency and stubborn goodness that's deeply satisfying for the reader.

The Seventh Bride is a lot closer to horror than the other T. Kingfisher books I've read, but it never fell into my dislike of the horror genre, despite a few gruesome bits. I think that's because neither Rhea nor the narrator treat the horrific aspects as representative of the true shape of the world. Rhea instead confronts them with a stubborn determination and an attempt to make the best of each moment, and with a practical self-awareness that I loved reading about.

The problem with crying in the woods, by the side of a white road that leads somewhere terrible, is that the reason for crying isn't inside your head. You have a perfectly legitimate and pressing reason for crying, and it will still be there in five minutes, except that your throat will be raw and your eyes will itch and absolutely nothing else will have changed.

Lord Crevan, when Rhea finally reaches him, toys with her by giving her progressively more horrible puzzle tasks, threatening her with the promised marriage if she fails at any of them. The way this part of the book finally resolves is one of the best moments I've read in any book. Kingfisher captures an aspect of moral decisions, and a way in which evil doesn't work the way that evil people expect it to work, that I can't remember seeing an author capture this well.

There are a lot of things here for Rhea to untangle: the nature of Crevan's power, her unexpected allies in his manor, why he proposed marriage to her, and of course how to escape his power. The plot works, but I don't think it was the best part of the book, and it tends to happen to Rhea rather than being driven by her. But I have rarely read a book quite this confident of its moral center, or quite as justified in that confidence.

I am definitely reading everything Vernon has published under the T. Kingfisher name, and quite possibly most of her children's books as well. Recommended, particularly if you liked the excerpt above. There's an entire book full of paragraphs like that waiting for you.

Rating: 8 out of 10

28 September, 2017 04:41AM

Dirk Eddelbuettel

RcppZiggurat 0.1.4

A maintenance release of RcppZiggurat is now on the CRAN network for R. It switched the vignette to our new pinp package and its two-column pdf default.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl---all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

The NEWS file entry below lists all changes.

Changes in version 0.1.4 (2017-07-27)

  • The vignette now uses the pinp package in two-column mode.

  • Dynamic symbol registration is now enabled.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 September, 2017 02:06AM

September 27, 2017

Enrico Zini

Systemd device units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.device units

Several devices are automatically represented inside systemd by .device units, which can be used to activate services when a given device exists in the file system.

Run systemctl --all --full -t device to see a list of all devices for which systemd has a unit on your system.

For example, this .service unit plays a sound as long as a specific USB key is plugged into my system:

[Unit]
Description=Beeps while a USB key is plugged
DefaultDependencies=false
StopWhenUnneeded=true

[Install]
WantedBy=dev-disk-by\x2dlabel-ERLUG.device

[Service]
Type=simple
ExecStart=/bin/sh -ec 'while true; do /usr/bin/aplay -q /tmp/beep.wav; sleep 2; done'

If you need to work with a device not seen by default by systemd, you can add a udev rule that makes it available, by adding the systemd tag to the device with TAG+="systemd".

It is also possible to give the device an extra alias using ENV{SYSTEMD_ALIAS}="/dev/my-alias-name".
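
For example, a udev rule along the following lines would tag a USB serial adapter for systemd and give it an alias (the subsystem and the vendor/product IDs are placeholders, to be replaced with values found as described below):

# /etc/udev/rules.d/99-my-device.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", TAG+="systemd", ENV{SYSTEMD_ALIAS}="/dev/my-alias-name"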

To figure out all you can use for matching a device:

  1. Run udevadm monitor --environment and plug the device
  2. Look at the DEVNAME= values and pick one that addresses your device the way you prefer
  3. udevadm info --attribute-walk --name=*the value of devname* will give you all you can use for matching in the udev rule.

See:

27 September, 2017 10:00PM

Qt cross-architecture development in Debian

Use case: use Debian Stable as the environment on amd64 development machines to develop Qt applications for the Raspberry Pi or other smallish armhf devices.

Qt Creator is used as the Integrated Development Environment, and it supports cross-compiling, running the built source on the target system, and remote debugging.

Debian Stable (vanilla or Raspbian) runs on both the host and the target systems, so libraries can be kept in sync, and both systems have access to a vast amount of libraries, with security support.

On top of that, armhf libraries can be installed with multiarch also in the host machine, so cross-builders have access to the exact same libraries as the target system.

This sounds like a dream system. But. We're not quite there yet.

cross-compile attempts

I tried cross compiling a few packages:

$ sudo debootstrap stretch cross
$ echo "strech_cross" | sudo tee cross/etc/debian_chroot
$ sudo systemd-nspawn -D cross
# dpkg --add-architecture armhf
# echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
# apt update
# apt install --no-install-recommends build-essential crossbuild-essential-armhf

Some packages work:

# apt source bc
# cd bc-1.06.95/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
dh_auto_configure -- --prefix=/usr --with-readline
        ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=\${prefix}/include --mandir=\${prefix}/share/man --infodir=\${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=\${prefix}/lib/arm-linux-gnueabihf --libexecdir=\${prefix}/lib/arm-linux-gnueabihf --disable-maintainer-mode --disable-dependency-tracking --host=arm-linux-gnueabihf --prefix=/usr --with-readline
…
dpkg-deb: building package 'dc-dbgsym' in '../dc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc-dbgsym' in '../bc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'dc' in '../dc_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc' in '../bc_1.06.95-9_armhf.deb'.
 dpkg-genbuildinfo --build=binary
 dpkg-genchanges --build=binary >../bc_1.06.95-9_armhf.changes
dpkg-genchanges: info: binary-only upload (no source code included)
 dpkg-source --after-build bc-1.06.95
dpkg-buildpackage: info: binary-only upload (no source included)

With qmake based Qt packages, qmake is not configured for cross-building, probably because it is not currently supported:

# apt source pumpa
# cd pumpa-0.9.3/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
        qmake -makefile -nocache "QMAKE_CFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=.
          -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CXXFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CXXFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_LFLAGS_RELEASE=-Wl,-z,relro -Wl,-z,now"
          "QMAKE_LFLAGS_DEBUG=-Wl,-z,relro -Wl,-z,now" QMAKE_STRIP=: PREFIX=/usr
qmake: could not exec '/usr/lib/x86_64-linux-gnu/qt5/bin/qmake': No such file or directory
…
debian/rules:19: recipe for target 'build' failed
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2

With cmake based Qt packages it goes a little better in that it finds the cross compiler, pkg-config and some multiarch paths, but then it tries to run armhf moc, which fails:

# apt source caneda
# cd caneda-0.3.0/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
        cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None
          -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_SYSTEM_NAME=Linux
          -DCMAKE_SYSTEM_PROCESSOR=arm -DCMAKE_C_COMPILER=arm-linux-gnueabihf-gcc
          -DCMAKE_CXX_COMPILER=arm-linux-gnueabihf-g\+\+
          -DPKG_CONFIG_EXECUTABLE=/usr/bin/arm-linux-gnueabihf-pkg-config
          -DCMAKE_INSTALL_LIBDIR=lib/arm-linux-gnueabihf
…
CMake Error at /usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfig.cmake:27 (message):
  The imported target "Qt5::Core" references the file

     "/usr/lib/arm-linux-gnueabihf/qt5/bin/moc"

  but this file does not exist.  Possible reasons include:

  * The file was deleted, renamed, or moved to another location.

  * An install or uninstall procedure did not complete successfully.

  * The installation package was faulty and contained

     "/usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfigExtras.cmake"

  but not all the files it references.

Note: Although I improvised a chroot to be able to fool around with it, I would use pbuilder or sbuild to do the actual builds.

Helmut suggests pbuilder --host-arch or sbuild --host.

Doing it the non-Debian way

This guide in the meantime explains how to set up a cross-compiling Qt toolchain in a rather dirty way, by recompiling Qt pointing it at pieces of the Qt deployed on the Raspberry Pi.

Following that guide, replacing the CROSS_COMPILE value with /usr/bin/arm-linux-gnueabihf- gave me a working qtbase, for which it is easy to create a Kit for Qt Creator that works, and supports linking applications with Debian development packages that do not use Qt.

However, at that point I need to recompile all dependencies that use Qt myself, and I quickly got stuck at that monster of QtWebEngine, whose sources embed the whole of Chromium.

Having a Qt based development environment in which I need to become the maintainer for the whole Qt toolchain is not a product I can offer to a customer. Cross compiling qmake based packages on stretch is not currently supported, so at the moment I had to suggest to postpone all plans for total world domination for at least two years.

Cross-building Debian

In the meantime, Helmut Grohne has been putting a lot of effort into making Debian packages cross-buildable:

helmut> enrico: yes, cross building is painful. we have ~26000 source packages. of those, ~13000 build arch-dep packages. of those, ~6000 have cross-satisfiable build-depends. of those, I tried cross building ~2300. of those 1300 cross built. so we are at about 10% working.

helmut> enrico: plus there are some 607 source packages affected by some 326 bugs with patches.

helmut> enrico: gogo nmu them

helmut> enrico: I've filed some 1000 bugs (most of them with patches) now. around 600 are fixed :)

He is doing it mostly alone, and I would like people not to be alone when they do a lot of work in Debian, so…

Join Helmut in the effort of making Debian cross-buildable!

Build any Debian package for any device right from the comfort of your own work computer!

Have a single development environment seamlessly spanning architecture boundaries, with the power of all that there is in Debian!

Join Helmut in the effort of making Debian cross-buildable!

Apply here, or join #debian-bootstrap on OFTC!

Cross-building Qt in Debian

mitya57 summarised the situation on the KDE team side:

mitya57> we have cross-building stuff on our TODO list, but it will likely require a lot of time and neither Lisandro nor I have it currently.

mitya57> see https://gobby.debian.org/export/Teams/KDE/qt-cross for a summary of what needs to be done.

mitya57> Any help or patches are always welcome :))

qemu-user-static

Helmut also suggested using qemu-user-static to make the host system able to run binaries compiled for the target system, so that even if a non-cross-compiling Qt build tries to run moc and friends in their target architecture version, they would transparently succeed.

At that point, it would just be a matter of replacing compiler paths to point to the native cross-compiling gcc, and the build would not be slowed down by much.
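
A rough sketch of the setup on the build machine (untested on my side, as noted below; root prompt shown as #, as in the snippets above; qemu-user-static registers binfmt handlers so that armhf binaries run transparently through qemu):

# apt install qemu-user-static binfmt-support
# /usr/lib/arm-linux-gnueabihf/qt5/bin/moc --version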

Fixing bug #781226 would help in making it possible to configure a multiarch version of qmake as the qmake used for cross compiling.

I have not had a chance of trying to cross-build in this way yet.

In the meantime...

Having qtcreator able to work on an amd64 devel machine and deploy/test/debug remotely on an arm target machine, where both machine run debian stable and have libraries in sync, would be a great thing to have even though packages do not cross-build yet.

Helmut summarised the situation on IRC:

svuorela and others repeat that Qt upstream is not compatible with Debian's multiarch thinking, in that Qt upstream insists on having one toolchain for each pair of architectures, whereas the Debian way tends to be to make packages generic and split stuff such that it can be mixed and matched.

An example being that you need to run qmake (thus you need qmake for the build architecture), but qmake also embeds the relevant paths and you need to query it for them (so you need qmake for the host architecture)

Either you run it through qemu, or you have a particular cross qmake for your build/host pair, or you fix qt upstream to stop this madness

Building qmake in Debian for each host-target pair, even just limited to released architectures, would mean building Qt 100 times, and that's not going to scale.

I wonder:

  • can I have a qmake-$ARCH binary that can build a source tree using locally installed multiarch Qt libraries? Do I need to recompile and ship the whole of Qt, or just qmake?
  • is there a recipe for building a cross-building Qt environment that would be able to use Debian development libraries installed the normal multiarch way?
  • we can't do perfect yet, but can we do better than this?

27 September, 2017 01:25PM

Dirk Eddelbuettel

RcppAnnoy 0.0.10

A few short weeks after the more substantial 0.0.9 release of RcppAnnoy, we have a quick bug-fix update.

RcppAnnoy is our Rcpp-based R integration of the nifty Annoy library by Erik. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours.

Michaël Benesty noticed that our getItemsVector() function didn't, ahem, do much besides crashing. Simple bug, they happen--now fixed, and a unit test added.

Changes in this version are summarized here:

Changes in version 0.0.10 (2017-09-25)

  • The getItemsVector() function no longer crashes (#24)

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 September, 2017 02:05AM

September 26, 2017

Enrico Zini

Systemd timer units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.timer units

Configure activation of other units (usually a .service unit) at some given time.

The functionality is similar to cron, with more features and a finer time granularity. For example, in Debian Stretch apt has a timer for running apt update which runs at a random time to distribute load on servers:

# /lib/systemd/system/apt-daily.timer
[Unit]
Description=Daily apt download activities
After=network-online.target
Wants=network-online.target

[Timer]
OnCalendar=*-*-* 6,18:00
RandomizedDelaySec=12h
Persistent=true

[Install]
WantedBy=timers.target

The corresponding apt-daily.service file then only runs when the system is on mains power, to avoid unexpected battery drain on systems like laptops:

# /lib/systemd/system/apt-daily.service
[Unit]
Description=Daily apt download activities
Documentation=man:apt(8)
ConditionACPower=true

[Service]
Type=oneshot
ExecStart=/usr/lib/apt/apt.systemd.daily update

Note that if you want to schedule tasks with an accuracy under a minute (for example to play a beep every 5 seconds when running on battery), you need to also configure AccuracySec= for the timer to a delay shorter than the default 1 minute.

This is how to make your computer beep when on battery:

# /etc/systemd/system/beep-on-battery.timer
[Unit]
Description=Beeps every 10 seconds

[Install]
WantedBy=timers.target

[Timer]
AccuracySec=1s
OnUnitActiveSec=10s
# /etc/systemd/system/beep-on-battery.service
[Unit]
Description=Beeps when on battery
ConditionACPower=false

[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav
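
Assuming the two unit files are installed under /etc/systemd/system/ as in the comments above, the timer is enabled and inspected with the usual commands:

systemctl daemon-reload
systemctl enable --now beep-on-battery.timer
systemctl list-timers beep-on-battery.timer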

See:

26 September, 2017 10:00PM

Colin Watson

A mysterious bug with Twisted plugins

I fixed a bug in Launchpad recently that led me deeper than I expected.

Launchpad uses Buildout as its build system for Python packages, and it’s served us well for many years. However, we’re using 1.7.1, which doesn’t support ensuring that packages required using setuptools’ setup_requires keyword only ever come from the local index URL when one is specified; that’s an essential constraint we need to be able to impose so that our build system isn’t immediately sensitive to downtime or changes in PyPI. There are various issues/PRs about this in Buildout (e.g. #238), but even if those are fixed it’ll almost certainly only be in Buildout v2, and upgrading to that is its own kettle of fish for other reasons. All this is a serious problem for us because newer versions of many of our vital dependencies (Twisted and testtools, to name but two) use setup_requires to pull in pbr, and so we’ve been stuck on old versions for some time; this is part of why Launchpad doesn’t yet support newer SSH key types, for instance. This situation obviously isn’t sustainable.
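
For context, this is roughly what a dependency's setup.py does that triggers the problem: setuptools downloads whatever is listed in setup_requires itself, at setup.py execution time, which is the step that Buildout 1.7.1 cannot pin to the local index (a sketch, not taken from any particular package):

# setup.py of a hypothetical dependency; setuptools fetches pbr itself here
from setuptools import setup

setup(
    name="example",
    version="1.0",
    setup_requires=["pbr"],
)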

To deal with this, I’ve been working for some time on switching to virtualenv and pip. This is harder than you might think: Launchpad is a long-lived and complicated project, and it had quite a number of explicit and implicit dependencies on Buildout’s configuration and behaviour. Upgrading our infrastructure from Ubuntu 12.04 to 16.04 has helped a lot (12.04’s baseline virtualenv and pip have some deficiencies that would have required a more complicated bootstrapping procedure). I’ve dealt with most of these: for example, I had to reorganise a lot of our helper scripts (1, 2, 3), but there are still a few more things to go.

One remaining problem was that our Buildout configuration relied on building several different environments with different Python paths for various things. While this would technically be possible by way of building multiple virtualenvs, this would inflate our build time even further (we’re already going to have to cope with some slowdown as a result of using virtualenv, because the build system now has to do a lot more than constructing a glorified link farm to a bunch of cached eggs), and it seems like unnecessary complexity. The obvious thing to do seemed to be to collapse these into a single environment, since there was no obvious reason why it should actually matter if txpkgupload and txlongpoll were carefully kept off the path when running most of Launchpad: so I did that.

Then our build system got very sad.

Hmm, I thought. To keep our test times somewhat manageable, we run them in parallel across 20 containers, and we randomise the order in which they run to try to shake out test isolation bugs. It’s not completely unknown for there to be some oddities resulting from that. So I ran it again. Nope, but slightly differently sad this time. Furthermore, I couldn’t reproduce these failures locally no matter how hard I tried. Oh dear. This was obviously not going to be a good day.

In fact I spent a while on various different guesswork-based approaches. I found bug 571334 in Ampoule, an AMP-based process pool implementation that we use for some job runners, and proposed a fix for that, but cherry-picking that fix into Launchpad didn’t help matters. I tried backing out subsets of my changes and determined that if both txlongpoll and txpkgupload were absent from the Python module path in the context of the tests in question then everything was fine. I tried running strace locally and staring at the output for some time in the hope of enlightenment: that reminded me that the two packages in question install modules under twisted.plugins, which did at least establish a reason they might affect the environment that was more plausible than magic, but nothing much more specific than that.

On Friday I was fiddling about with this again and trying to insert some more debugging when I noticed some interesting behaviour around plugin caching. If I caused the txpkgupload plugin to raise an exception when loaded, the Twisted plugin system would remove its dropin.cache (because it was stale) and not create a new one (because there was now no content to put in it). After that, running the relevant tests would fail as I’d seen in our buildbot. Aha! This meant that I could also reproduce it by doing an even cleaner build than I’d previously tried to do, by removing the cached txpkgupload and txlongpoll eggs and allowing the build system to recreate them. When they were recreated, they didn’t contain dropin.cache, instead allowing that to be created on first use.

Based on this clue I was able to get to the answer relatively quickly. Ampoule has a specialised bootstrapping sequence for its worker processes that starts by doing this:

from twisted.application import reactors
reactors.installReactor(reactor)

Now, twisted.application.reactors.installReactor calls twisted.plugin.getPlugins, so the very start of this bootstrapping sequence is going to involve loading all plugins found on the module path (I assume it’s possible to write a plugin that adds an alternative reactor implementation). If dropin.cache is up to date, then it will just get the information it needs from that; but if it isn’t, it will go ahead and import the plugin. If the plugin happens (as Twisted code often does) to run from twisted.internet import reactor at some point while being imported, then that will install the platform’s default reactor, and then twisted.application.reactors.installReactor will raise ReactorAlreadyInstalledError. Since Ampoule turns this into an info-level log message for some reason, and the tests in question only passed through error-level messages or higher, this meant that all we could see was that a worker process had exited non-zero but not why.

The Twisted documentation recommends generating the plugin cache at build time for other reasons, but we weren’t doing that. Fixing that makes everything work again.
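
For reference, regenerating the cache boils down to importing the plugin packages and listing their plugins at build time, something along these lines (a sketch; the actual fix lives in our build scripts):

# iterating over all plugins rewrites any stale dropin.cache as a side effect
from twisted.plugin import IPlugin, getPlugins
list(getPlugins(IPlugin))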

There are still a few more things needed to get us onto pip, but we’re now pretty close. After that we can finally start bringing our dependencies up to date.

26 September, 2017 03:20PM by Colin Watson

Norbert Preining

Debian/TeX Live 2017.20170926-1

A full month or more has passed since the last upload of TeX Live, so it was high time to prepare a new package. Nothing spectacular here, I have to say: two small bugs fixed and the usual long list of updates and new packages.

Among the new packages I found fontloader-luaotfload an interesting project. Loading fonts via Lua code in LuaTeX is by now standard, and this package allows for experiments with newer/alternative font loaders. Another very interesting newcomer is pdfreview, which lets you set the pages of another PDF on a lined background and add notes to them, good for reviewing.

Enjoy.

New packages

abnt, algobox, beilstein, bib2gls, cheatsheet, coelacanth, dijkstra, dynkin-diagrams, endofproofwd, fetchcls, fixjfm, fontloader-luaotfload, forms16be, hithesis, ifxptex, komacv-rg, ku-template, latex-refsheet, limecv, mensa-tex, multilang, na-box, notes-tex, octave, pdfreview, pst-poker, theatre, upzhkinsoku, witharrows.

Updated packages

2up, acmart, acro, amsmath, animate, babel, babel-french, babel-hungarian, bangorcsthesis, beamer, beebe, biblatex-gost, biblatex-philosophy, biblatex-source-division, bibletext, bidi, bpchem, bxjaprnind, bxjscls, bytefield, checkcites, chemmacros, chet, chickenize, complexity, curves, cweb, datetime2-german, e-french, epstopdf, eqparbox, esami, etoc, fbb, fithesis, fmtcount, fnspe, fontspec, genealogytree, glossaries, glossaries-extra, hvfloat, ifptex, invoice2, jfmutil, jlreq, jsclasses, koma-script, l3build, l3experimental, l3kernel, l3packages, latexindent, libertinust1math, luatexja, lwarp, markdown, mcf2graph, media9, nddiss, newpx, newtx, novel, numspell, ocgx2, philokalia, phfqit, placeat, platex, poemscol, powerdot, pst-barcode, pst-cie, pst-exa, pst-fit, pst-func, pst-geometrictools, pst-ode, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst-vehicle, pst2pdf, pstricks, pstricks-add, ptex-base, ptex-fonts, pxchfon, quran, randomlist, reledmac, robustindex, scratch, skrapport, spectralsequences, tcolorbox, tetex, tex4ht, texcount, texdef, texinfo, texlive-docindex, texlive-scripts, tikzducks, tikzsymbols, tocloft, translations, updmap-map, uplatex, widetable, xepersian, xetexref, xint, xsim, zhlipsum.

26 September, 2017 03:01PM by Norbert Preining

Iain R. Learmonth

SMS Verification

I’ve received an email today from Barclaycard with the following:

“From time to time, to make sure it’s you who’s using your Barclaycard online, we’ll send you a text with a verification code for you to use on the Verified by Visa screen that’ll pop up on your payment page.”

The proprietary nature of mobile phones, with the hardware specifications and the software closed off from inspection or audit and considered to be trade secrets, makes my phone and my tablet the least trusted devices I own and use.

Due to this lack of trust, I've often held back from using my phone or tablet for certain tasks where I can still get away with not doing so. I have experimented with having read-only access to my calendars and contacts to ensure that if my phone is compromised they can't just be wiped out, though in the end I had to give in, as my calendar was becoming too difficult to manage with a paper system for entering new events.

I wanted to try to reduce the attractiveness of compromising my phone. Anyone that really wants to have a go at my phone could probably get in. It’s an older Samsung Android phone on a UK network and software updates rarely come through in a timely manner. Anything that I give my phone access to is at risk and that risk needs to be balanced by some real world benefits.

These are just the problems with the phone itself. When you’re using SMS authentication, even with the most secure phone ever, you’re still going to be using the phone network. SMS authentication is about equivalent, in terms of the security it really offers, to your mobile phone number being your password when it comes to an even mildly motivated attacker. You probably don’t treat your mobile phone number as a password, nor does the provider or anyone you’ve given it to, so you can assume that it’s compromised.

Why are mobile phones so popular for two factor (or, in an increasing number of cases, single factor) authentication? Not because they improve security but because they're convenient and everyone has one. This seems like a bad plan.

26 September, 2017 02:00PM

Reproducible builds folks

Reproducible Builds: Weekly report #126

Here's what happened in the Reproducible Builds effort between Sunday September 17th and Saturday September 23rd 2017:

Media coverage

  • Christos Zoulas gave a talk entitled Reproducible builds on NetBSD at EuroBSDCon 2017

Reproducible work in other packages

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

1 package review was added, 49 have been updated and 54 have been removed this week, adding to our knowledge about identified issues.

One issue type was updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (56)
  • Bas Couwenberg (1)
  • Helmut Grohne (1)
  • Nobuhiro Iwamatsu (2)

diffoscope development

Version 87 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

strip-nondeterminism development

reprotest development

Version 0.7 was uploaded to unstable by Ximin Luo:

tests.reproducible-builds.org

Vagrant Cascadian and Holger Levsen:

  • Re-added an armhf build node that had been disabled due to performance issues, but which works with Linux 4.14-rc1 now! #876212

Holger Levsen:

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

26 September, 2017 07:22AM

Russ Allbery

Review: Artemis Fowl

Review: Artemis Fowl, by Eoin Colfer

Series: Artemis Fowl #1
Publisher: Disney-Hyperion
Copyright: 2001
ISBN: 1-4231-2452-9
Format: Kindle
Pages: 281

Artemis Fowl is the heir to the Fowl criminal empire and a child prodigy. He's also one of the few humans to know of the existence of fairies, who are still present in the world, hiding from humans and living by their own rules. As the book opens, he's in search of those rules: a copy of the book that governs the lives of fairies. With that knowledge, he should be able to pull off a heist worthy of his family's legacy.

Captain Holly Short is a leprechaun... or, more correctly, a LEPrecon. She's one of the fairy police officers that investigate threats to the fairies who are hiding in a vast underground civilization. The fairies have magic, but they also have advanced (and miniaturized) technology, maintained in large part by a grumpy and egotistical centaur (named Foaly, because it's that sort of book). She's also the fairy unlucky enough to be captured by Artemis's formidable personal bodyguard on their first attempt to kidnap a hostage for their ransom demands.

This is the first book of a long series of young adult novels that has also spawned graphic novels and a movie currently in production. It has that lean and clear feeling of the younger side of young adult writing: larger-than-life characters who are distinctive and easy to remember, a short introductory setup that dives directly into the main plot, and a story that neatly pulls together every element raised in the story. The world-building is its strongest point, particularly the mix of tongue-in-cheek technology — ships that ride magma plumes, mechanical wings, and helmet-mounted lights to blind trolls — and science-tinged magic that the fairies build their police and army on. Fairies are far beyond humans in capability, and can be deadly and ruthless, but they have to follow a tightly constrained set of rules that are often not convenient.

Sadly, the characters don't live up to the world-building. I did enjoy a few of them, particularly Artemis's loyal bodyguards and the dwarf Mulch Diggums. But Holly, despite being likable, is a bit of a blank slate: the empathetic, overworked trooper who is mostly indistinguishable from other characters in similar stories. The gruff captain, the sarcastic technician Foaly, and the various other LEP agents all felt like they were taken straight from central casting. And then there's Artemis himself.

Artemis is the protagonist of the story, in that he's the one who initiates all of the action and the one who has the most interesting motivations. The story is about him, as the third-person narrator in the introduction makes clear. He's trying very hard to be a criminal genius with the deductive abilities of Sherlock Holmes and the speaking style of a Bond villain, but he's also twelve, his father has disappeared, and his mother is going slowly insane. I picked this book up on the recommendation of another reader who found that contrast compelling.

Unfortunately, I thought Artemis was just an abusive jerk. Yes, yes, family tragedy, yes, he's trapped in his conception of himself, but he's arrogant, utterly uncaring about how his actions affect other people, and dismissive and cruel even to his bodyguards (who are much better friends than he deserves). I think liking this book requires liking Artemis at least well enough to consider him an anti-hero, and I can squint and see that appeal if you have that reaction. But I just wanted him to lose. Not in the "you will be slowly redeemed over the course of a long series" way, but in the "you are a horrible person and I hope you get what's coming to you" way. The humor of the fairy parts of the book was undermined too much by the fact that many of them would like to kill Artemis for real, and I mostly wanted them to succeed.

This may or may not have to do with my low tolerance for egotistical smart-asses who order other people to do things that they refuse to explain.

Without some appreciation for Artemis, this is a story with some neat world-building, a fairly generic protagonist in Holly, and a plot in which the bad guys win. To make matters worse, I thought the supposedly bright note at the end of the story was just creepy, as was everything else involving Artemis's mother. The review I read was of the first three books, so it's entirely possible that this series gets better as it goes along, but there wasn't enough I enjoyed in the first book for me to keep reading.

Followed by Artemis Fowl: The Arctic Incident.

Rating: 5 out of 10

26 September, 2017 04:36AM

September 25, 2017

Enrico Zini

Systemd mount and swap units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.mount and .swap units

They describe mount points and swap areas in much the same way as /etc/fstab does, but with more functionality and integration with the dependency system.

It is possible to define, for example, a filesystem that should be mounted only when the network is available and a given service has successfully started, and a service that should be started only after a given filesystem has been successfully mounted.
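
As a rough illustration (the device, mount point and service name below are invented), a unit file called srv-data.mount, whose name must match the escaped mount path, could look like this:

[Unit]
Description=Example data filesystem
# Invented dependencies: wait for the network and for example.service
Requires=network-online.target example.service
After=network-online.target example.service

[Mount]
What=/dev/disk/by-label/data
Where=/srv/data
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target

A service that needs this filesystem can in turn declare RequiresMountsFor=/srv/data in its own [Unit] section.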

At boot, systemd uses systemd-fstab-generator to generate mount and swap units from /etc/fstab, so that the usual fstab configuration file can still be used to configure the file system layout.

See man systemd.mount, and man systemd.swap.

See systemctl --all -t mount and systemctl --all -t swap for examples.

25 September, 2017 10:00PM

hackergotchi for Steve Kemp

Steve Kemp

Started work on an internet-of-things Radio

So recently I was in York at the Bytemark office, and I read a piece about building a radio in a Raspberry Pi magazine. It got me curious, so when I got home to sunny Helsinki I figured I'd have a stab at it.

I don't have a fixed goal in mind, but what I do have is:

  • A WeMos Mini D1
    • Cost €3.00
    • ESP8266-powered board, which can be programmed easily in C++ and contains on-board WiFi as well as a bunch of I/O pins.
  • A RDA5807M FM Radio chip.
    • Cost 37 cents.
    • With a crystal for support.

The initial goal is simple: wire the receiver/decoder to the board, and listen to the radio.

After that there are obvious extensions, such as adding an LCD display to show the frequency (What's the frequency, Kenneth?), and later to show the station details via RDS.

Finally I could add some buttons/switches/tweaks for selecting next/previous stations, and adjusting the volume. Initially that'll be handled by pointing a browser at the IP-address of the device.

The first attempt at using the RDA5807M chip was a failure, as the thing was too damn small and not a standard size. Adding header-pins to the chip was almost impossible, and when I did get them soldered on, the thing just gave me static hisses.

However I later read the details of the chip more carefully and realized that it isn't powerful enough to drive (even) headphones. It requires an amp of some kind. With that extra knowledge I was able to send the output to the powered speakers I have sat beside my PC.

My code is basic: it sets up the FM receiver/decoder and scans the spectrum. When it finds a station it outputs the name, received via RDS, over the serial console, and then just plays it.

I've got a PAM8403-based amplifier board on order; when that arrives I'll get back to the project, and hook up WiFi and a simple web-page to store stations, tuning, etc.

My "token goal" at the moment is a radio that switches on at 7AM and switches off at 8AM. In addition to that it'll serve a web-page allowing interactive control, regardless of any buttons that are wired in.

I also have another project in the wings. I've ordered a software-defined radio (USB-toy) which I'm planning to use to plot aircraft in real-time, as they arrive/depart/fly over Helsinki. No doubt I'll write that up too.

25 September, 2017 09:00PM

hackergotchi for Chris Lamb

Chris Lamb

Lintian: We are all Perl developers now

Lintian is a static analysis tool for Debian packages, reporting on various errors, omissions and general quality-assurance issues to maintainers.

I've previously written about my exploits with Lintian as well as authoring a short tutorial on how to write your own Lintian check.

Anyway, I recently uploaded version 2.5.53, about two months since the previous release. The biggest changes you may notice are support for the latest version of the Debian Policy as well as the addition of checks to encourage the migration to Python 3.

Thanks to all who contributed patches, code review and bug reports to this release. The full changelog is as follows:

lintian (2.5.53) unstable; urgency=medium

  The "we are all Perl developers now" release.

  * Summary of tag changes:
    + Added:
      - alternatively-build-depends-on-python-sphinx-and-python3-sphinx
      - build-depends-on-python-sphinx-only
      - dependency-on-python-version-marked-for-end-of-life
      - maintainer-script-interpreter
      - missing-call-to-dpkg-maintscript-helper
      - node-package-install-in-nodejs-rootdir
      - override-file-in-wrong-package
      - package-installs-java-bytecode
      - python-foo-but-no-python3-foo
      - script-needs-depends-on-sensible-utils
      - script-uses-deprecated-nodejs-location
      - transitional-package-should-be-oldlibs-optional
      - unnecessary-testsuite-autopkgtest-header
      - vcs-browser-links-to-empty-view
    + Removed:
      - debug-package-should-be-priority-extra
      - missing-classpath
      - transitional-package-should-be-oldlibs-extra

  * checks/apache2.pm:
    + [CL] Fix an apache2-unparsable-dependency false positive by allowing
      periods (".") in dependency names.  (Closes: #873701)
  * checks/binaries.pm:
    + [CL] Apply patches from Guillem Jover & Boud Roukema to improve the
      description of the binary-file-built-without-LFS-support tag.
      (Closes: #874078)
  * checks/changes.{pm,desc}:
    + [CL] Ignore DFSG-repacked packages when checking for upstream
      source tarball signatures as they will never match by definition.
      (Closes: #871957)
    + [CL] Downgrade severity of orig-tarball-missing-upstream-signature
      from "E:" to "W:" as many common tools do not make including the
      signatures easy enough right now.  (Closes: #870722, #870069)
    + [CL] Expand the explanation of the
      orig-tarball-missing-upstream-signature tag to include the location
      of where dpkg-source will look. Thanks to Theodore Ts'o for the
      suggestion.
  * checks/copyright-file.pm:
    + [CL] Address a number of issues in copyright-year-in-future:
      - Prevent false positives in port numbers, email addresses, ISO
        standard numbers and matching specific and general street
        addresses.  (Closes: #869788)
      - Match all violating years in a line, not just the first (eg.
        "2000-2107").
      - Ignore meta copyright statements such as "Original Author". Thanks
        to Thorsten Alteholz for the bug report.  (Closes: #873323)
      - Expand testsuite.
  * checks/cruft.{pm,desc}:
    + [CL] Downgrade severity of file-contains-fixme-placeholder
      tag from "important" (ie. "E:") to "wishlist" (ie. "I:").
      Thanks to Gregor Herrmann for the suggestion.
    + [CL] Apply patch from Alex Muntada (alexm) to use "substr" instead
      of "substring" in mentions-deprecated-usr-lib-perl5-directory's
      description.  (Closes: #871767)
    + [CL] Don't check copyright_hints file for FIXME placeholders.
      (Closes: #872843)
    + [CL] Don't match quoted "FIXME" variants as they are almost always
      deliberate. Thanks to Adrian Bunk for the report.  (Closes: #870199)
    + [CL] Avoid false positives in missing source checks for "CSS Browser
      Selector".  (Closes: #874381)
  * checks/debhelper.pm:
    + [CL] Prevent a false positive of
      missing-build-dependency-for-dh_-command that can be exposed by
      following the advice for the recently added
      useless-autoreconf-build-depends tag.  (Closes: #869541)
  * checks/debian-readme.{pm,desc}:
    + [CL] Ensure readme-debian-contains-debmake-template also checks
      for templates "Automatically generated by debmake".
  * checks/description.{desc,pm}:
    + [CL] Clarify explanation of description-starts-with-leading-spaces
      tag. Thanks to Taylor Kline  for the report
      and patch.  (Closes: #849622)
    + [NT] Skip capitalization-error-in-description-synopsis for
      auto-generated packages (such as dbgsym packages).
  * checks/fields.{desc,pm}:
    + [CL] Ensure that python3-foo packages have "Section: python", not
      just python2-foo.  (Closes: #870272)
    + [RG] Do no longer require debug packages to be priority extra.
    + [BR] Use Lintian::Data for name/section mapping
    + [CL] Check for packages including "?rev=0&sc=0" in Vcs-Browser.
      (Closes: #681713)
    + [NT] Transitional packages should now be "oldlibs/optional" rather
      than "oldlibs/extra".  The related tag has been renamed accordingly.
  * checks/filename-length.pm:
    + [NT] Skip the check on auto-generated binary packages (such as
      dbgsym packages).
  * checks/files.{pm,desc}:
    + [BR] Avoid privacy-breach-generic false positives for legal.xml.
    + [BR] Detect install of node package under /usr/lib/nodejs/[^/]*$
    + [CL] Check for packages shipping compiled Java class files. Thanks
      Carnë Draug .  (Closes: #873211)
    + [BR] Privacy breach is no longer experimental.
  * checks/init.d.desc:
    + [RG] Do not recommend a versioned dependency on lsb-base in
      init.d-script-needs-depends-on-lsb-base.  (Closes: #847144)
  * checks/java.pm:
    + [CL] Additionally consider .cljc files as code to avoid false-
      positive codeless-jar warnings.  (Closes: #870649)
    + [CL] Drop problematic missing-classpath check.  (Closes: #857123)
  * checks/menu-format.desc:
    + [CL] Prevent false positives in desktop-entry-lacks-keywords-entry
      for "Link" and "Directory" .desktop files.  (Closes: #873702)
  * checks/python.{pm,desc}:
    + [CL] Split out Python checks from "scripts" check to a new, source
      check of type "source".
    + [CL] Check for python-foo without corresponding python3-foo packages
      to assist in Python 2.x deprecation.  (Closes: #870681)
    + [CL] Check for packages that Build-Depend on python-sphinx only.
      (Closes: #870730)
    + [CL] Check for packages that alternatively Build-Depend on the
      Python 2 and Python 3 versions of Sphinx.  (Closes: #870758)
    + [CL] Check for binary packages that depend on Python 2.x.
      (Closes: #870822)
  * checks/scripts.pm:
    + [CL] Correct false positives in
      unconditional-use-of-dpkg-statoverride by detecting "if !" as a
      valid shell prefix.  (Closes: #869587)
    + [CL] Check for missing calls to dpkg-maintscript-helper(1) in
      maintainer scripts.  (Closes: #872042)
    + [CL] Check for packages using sensible-utils without declaring a
      dependency after its split from debianutils.  (Closes: #872611)
    + [CL] Warn about scripts using "nodejs" as an interpreter now that
      nodejs provides /usr/bin/node.  (Closes: #873096)
    + [BR] Add a statistic tag giving interpreter.
  * checks/testsuite.{desc,pm}:
    + [CL] Remove recommendations to add a "Testsuite: autopkgtest" field
      to debian/control as it is added when needed by dpkg-source(1)
      since dpkg 1.17.1.  (Closes: #865531)
    + [CL] Warn if we see an unnecessary "Testsuite: autopkgtest" header
      in debian/control.
    + [NT] Recognise "autopkgtest-pkg-go" as a valid test suite.
    + [CL] Recognise "autopkgtest-pkg-elpa" as a valid test suite.
      (Closes: #873458)
    + [CL] Recognise "autopkgtest-pkg-octave" as a valid test suite.
      (Closes: #875985)
    + [CL] Update the description of unknown-testsuite to reflect that
      "autopkgtest" is not the only valid value; the referenced URL
      is out-of-date (filed as #876008).  (Closes: #876003)

  * data/binaries/embedded-libs:
    + [RG] Detect embedded copies of heimdal, libgxps, libquicktime,
      libsass, libytnef, and taglib.
    + [RG] Use an additional string to detect embedded copies of
      openjpeg2.  (Closes: #762956)
  * data/fields/name_section_mappings:
    + [BR] node- package section is javascript.
    + [CL] Apply patch from Guillem Jover to add more section mappings.
      (Closes: #874121)
  * data/fields/obsolete-packages:
    + [MR] Add dh-systemd.  (Closes: #872076)
  * data/fields/perl-provides:
    + [CL] Refresh perl provides.
  * data/fields/virtual-packages:
    + [CL] Update data file from archive. This fixes a false positive for
      "bacula-director".  (Closes: #835120)
  * data/files/obsolete-paths:
    + [CL] Add note to /etc/bash_completion.d entry regarding stricter
      filename requirements.  (Closes: #814599)
  * data/files/privacy-breaker-websites:
    + [BR] Detect custom donation logos like apache.
    + [BR] Detect generic counter website.
  * data/standards-version/release-dates:
    + [CL] Add 4.0.1 and 4.1.0 as known standards versions.
      (Closes: #875509)

  * debian/control:
    + [CL] Mention Debian Policy v4.1.0 in the description.
    + [CL] Add myself to Uploaders.
    + [CL] Drop unnecessary "Testsuite: autopkgtest"; this is implied from
      debian/tests/control existing.

  * commands/info.pm:
    + [CL] Add a --list-tags option to print all tags Lintian knows about.
      Thanks to Rajendra Gokhale for the suggestion.  (Closes: #779675)
  * commands/lintian.pm:
    + [CL] Apply patch from Maia Everett to avoid British spelling when
      using en_US locale.  (Closes: #868897)

  * lib/Lintian/Check.pm:
    + [CL] Stop emitting {maintainer,uploader}-address-causes-mail-loops
      for @packages.debian.org addresses.  (Closes: #871575)
  * lib/Lintian/Collect/Binary.pm:
    + [NT] Introduce an "auto-generated" argument for "is_pkg_class".
  * lib/Lintian/Data.pm:
    + [CL] Modify Lintian::Data's "all" to always return keys in insertion
      order, dropping dependency on libtie-ixhash-perl.

  * helpers/coll/objdump-info-helper:
    + [CL] Apply patch from Steve Langasek to accommodate binutils 2.29
      outputting symbols in a different format on ppc64el.
      (Closes: #869750)

  * t/tests/fields-perl-provides/tags:
    + [CL] Update expected output to match new Perl provides.
  * t/tests/files-privacybreach/*:
    + [CL] Add explicit test for packages including external fonts via
      the Google Font API. Thanks to Ian Jackson for the report.
      (Closes: #873434)
    + [CL] Add explicit test for packages including external fonts via
      the Typekit API via <script/> HTML tags.
  * t/tests/*/desc:
    + [CL] Add missing entries in "Test-For" fields to make
      development/testing workflow less error-prone.

  * private/generate-tag-summary:
    + [CL] git-describe(1) will usually emit 7 hexadecimal digits as the
      abbreviated object name,  However, as this can be user-dependent,
      pass --abbrev=0 to ensure it does not vary between systems.  This
      also means we do not need to strip it ourselves.
  * private/refresh-*:
    + [CL] Use deb.debian.org as the default mirror.
    + [CL] Update locations of Contents-<arch> files; they are now
      namespaced by distribution (eg. "main").

 -- Chris Lamb <lamby@debian.org>  Wed, 20 Sep 2017 09:25:06 +0100

25 September, 2017 04:26PM

Lior Kaplan

Recruiting for Open Source jobs

Part of the services of Kaplan open source consulting is recruitment services to help companies find good open source people. In addition, we also try to help the community find open source friendly businesses to work at.

Beyond the “Usual Suspects” (e.g. RedHat), I encounter job descriptions which convince me that these companies know the advantages of using open source projects and hiring open source people.

A few recent examples I found in Israel:

  • Advantages: People who like to build stuff (we really like people who maintain/contribute to open source projects) (Wizer Research)
  • You will: Incubate and contribute to open source projects (iguazio)
  • The X factor – significant contribution to an open-source community (unnamed startup)
  • An example open source project our team released is CoreML (Apple)
  • Job Responsibilities: Write open-source tools and contribute to open-source projects. (unnamed startup)
  • We’d like to talk to people who: Appreciate open-source culture and philosophy. (Seeking Alpha)

From the applicant's side, knowing which code base he or she is going to work on can help them make a better and more educated choice about the offered position. From the company's side, it provides “hard” evidence of the applicant's capabilities and of what their code looks like, instead of just having them described or demonstrated in short tests. Not to mention the applicant's ability to work as part of a team or community.

For the Israeli readers, you can see the full list at https://kaplanopensource.co.il/jobs/


Filed under: Israeli Community, Open source businesses

25 September, 2017 01:34PM by Kaplan

September 24, 2017

Julian Andres Klode

APT 1.5 is out

APT 1.5 is out, almost 3 months after the release of 1.5 alpha 1, and almost six months after the release of 1.4 on April 1st. This release cycle was unusually short, as 1.4 was both the stretch release series and the zesty release series, and we waited for the latter of these releases before we started 1.5. In related news, 1.4.8 hit stretch-proposed-updates today, and is waiting in the unapproved queue for zesty.

This release series moves https support from apt-transport-https into apt proper, bringing with it support for https:// proxies, and support for autodetectproxy scripts that return http, https, and socks5h proxies for both http and https.

Unattended updates and upgrades now work better: The dependency on network-online was removed and we introduced a meta wait-online helper with support for NetworkManager, systemd-networkd, and connman that allows us to wait for network even if we want to run updates directly after a resume (which might or might not have worked before, depending on whether update ran before or after network was back up again). This also improves a boot performance regression for systems with rc.local files:

The rc.local.service unit specified After=network-online.target, and login stuff was After=rc.local.service, and apt-daily.timer was Wants=network-online.target, causing network-online.target to be pulled into the boot and the rc.local.service ordering dependency to take effect, significantly slowing down the boot.

An earlier, less intrusive variant of that fix is in 1.4.8: It just moves the network-online.target Want/After from apt-daily.timer to apt-daily.service so most boots are uncoupled now. I hope we get the full solution into stretch in a later point release, but we should gather some experience first before discussing this with the release team.

Balint Reczey also provided a patch to increase the timeout before killing the daily upgrade service to 15 minutes, to actually give unattended-upgrades some time to finish an in-progress update. Honestly, I'd have thought the machine hung up and force rebooted it after 5 seconds already. (this patch is also in 1.4.8)

We also made sure that unreadable config files no longer cause an error, but only a warning, as that was sort of a regression from previous releases; and we added documentation for /etc/apt/auth.conf, so people actually know the preferred way to place sensitive data like passwords (and can make their sources.list files world-readable again).
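
As a rough sketch (host name and credentials are invented), an /etc/apt/auth.conf entry uses netrc-style lines:

machine private.repo.example.org
login apt-user
password s3kr1t

With something like that in place, the matching sources.list entry for https://private.repo.example.org no longer has to embed the credentials itself.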

We also fixed apt-cdrom to support discs without MD5 hashes for Sources (the Files field), and re-enabled support for udev-based detection of cdrom devices which was accidentally broken for 4 years, as it was trying to load libudev.so.0 at runtime, but that library had an SONAME change to libudev.so.1 – we now link against it normally.

Furthermore, if certain information in Release files change, like the codename, apt will now request confirmation from the user, avoiding a scenario where a user has stable in their sources.list and accidentally upgrades to the next release when it becomes stable.

Paul Wise contributed patches to allow configuring the apt-daily intervals more easily – apt-daily is invoked twice a day by systemd but has more fine-grained internal timestamp files. You can now specify the intervals in seconds, minutes, hours, and day units, or specify “always” to always run (that is, up to twice a day on systemd, once per day on non-systemd platforms).
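
For illustration only, with invented values, a drop-in such as /etc/apt/apt.conf.d/02periodic might now read:

// Refresh the package lists roughly twice a day, run the upgrade step daily.
APT::Periodic::Update-Package-Lists "12h";
APT::Periodic::Unattended-Upgrade "1d";
// Or run on every activation of the apt-daily timer:
// APT::Periodic::Update-Package-Lists "always";

The APT::Periodic option names already exist today; only the unit-based values shown here rely on the new syntax described above.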

Development for the 1.6 series has started, and I intend to upload a first alpha to unstable in about a week, removing the apt-transport-https package and enabling compressed index files by default (saving space, a lot of space, at not much performance cost thanks to lz4). There will also be some small clean ups in there, but I don't expect any life-changing changes for now.

I think our new approach of uploading development releases directly to unstable instead of parking them in experimental is working out well. Some people are confused why alpha releases appear in unstable, but let me just say one thing: These labels basically just indicate feature-completeness, and not stability. An alpha is just very likely to get a lot more features, a beta is less likely (all the big stuff is in), and the release candidates just fix bugs.

Also, we now have 3 active stable series: The 1.2 LTS series, 1.4 medium LTS, and 1.5. 1.2 receives updates as part of Ubuntu 16.04 (xenial), 1.4 as part of Debian 9.0 (stretch) and Ubuntu 17.04 (zesty); whereas 1.5 will only be supported for 9 months (as part of Ubuntu 17.10). I think the stable release series are working well, although 1.4 is a bit tricky being shared by stretch and zesty right now (but zesty is history soon, so …).


Filed under: Debian, Ubuntu

24 September, 2017 07:32PM by Julian Andres Klode

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppGSL 0.3.3

A maintenance update RcppGSL 0.3.3 is now on CRAN. It switched the vignette to our new pinp package and its two-column pdf default.

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.3 (2017-09-24)

  • We also check for gsl-config at package load.

  • The vignette now uses the pinp package in two-column mode.

  • Minor other fixes to package and testing infrastructure.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 September, 2017 03:41PM

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

Free Software Efforts (2017W38)

Here’s my weekly report for week 38 of 2017. This week has not been a great week as I saw my primary development machine die in a spectacular reboot loop. Thanks to the wonderful community around Debian and free software (that if you’re reading this, you’re probably part of), I should be back up to speed soon. A replacement workstation is currently moving towards me and I’ve received a number of smaller donations that will go towards video converters and upgrades to get me back to full productivity.

Debian

I've prepared and tested backports for 3 packages in the tasktools packaging team: tasksh, bugwarrior and powerline-taskwarrior. Unfortunately I am not currently in the backports ACLs and so I can't upload these, but I'm hoping this will be resolved soon. Once these are uploaded, the latest upstream release for all packages in the tasktools team will be available either in the stable suite or in the stable backports suite.

In preparation for the shutdown of Alioth mailing lists, I’ve set up a new mailing list for the tasktools team and have already updated the maintainer fields for all the team’s packages in git. I’ve subscribed the old mailing list’s user to the new mailing list in DDPO so there will still be a comprehensive view there during the migration. I am currently in the process of reaching out to the admins of git.tasktools.org with a view to moving our git repositories there.

I’ve also continued to review the scapy package and have closed a couple more bugs that were already fixed in the latest upstream release but had been missed in the changelog.

Bugs closed (fixed/wontfix): #774962, #850570

Tor Project

I’ve deployed a small fix to an update from last week where the platform field on Atlas had been pulled across to the left column. It has now been returned to the right hand column and is not pushed down the page by long family lists.

I’ve been thinking about the merge of Compass functionality into a future Atlas and this is being tracked in #23517.

Tor Project has approved expenses (flights and hotel) for me to attend an in-person meeting of the Metrics Team. This meeting will occur in Berlin on the 28th September and I will write up a report detailing outcomes relevant to my work after the meeting. I have spent some time this week preparing for this meeting.

Bugs closed (fixed/wontfix): #22146, #22297, #23511

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The loss of my primary development machine was a setback; however, I have been donated a new workstation which should hopefully arrive soon. The hard drives in my NAS can now also be replaced as I have budget available for this. I do not see any hardware failures as imminent at this time, but should they occur I would not have budget to replace that hardware; I only have funds to replace the hardware that has already failed.

24 September, 2017 11:00AM

Petter Reinholdtsen

Easier recipe to observe the cell phones around you

A little more than a month ago I wrote how to observe the SIM card ID (aka IMSI number) of mobile phones talking to nearby mobile phone base stations using Debian GNU/Linux and a cheap USB software defined radio, and thus being able to pinpoint the location of people and equipment (like cars and trains) with an accuracy of a few kilometers. Since then we have worked to make the procedure even simpler, and it is now possible to do this without any manual frequency tuning and without building your own packages.

The gr-gsm package is now included in Debian testing and unstable, and the IMSI-catcher code no longer requires root access to fetch and decode the GSM data collected using gr-gsm.

Here is an updated recipe, using packages built by Debian and a git clone of two python scripts:

  1. Start with a Debian machine running the Buster version (aka testing).
  2. Run 'apt install gr-gsm python-numpy python-scipy python-scapy' as root to install required packages.
  3. Fetch the code decoding GSM packets using 'git clone https://github.com/Oros42/IMSI-catcher.git'.
  4. Insert a USB software defined radio supported by GNU Radio.
  5. Enter the IMSI-catcher directory and run 'python scan-and-livemon' to locate the frequency of nearby base stations and start listening for GSM packets on one of them.
  6. Enter the IMSI-catcher directory and run 'python simple_IMSI-catcher.py' to display the collected information.

Note, due to a bug somewhere the scan-and-livemon program (actually its underlying program grgsm_scanner) does not work with the HackRF radio. It does work with RTL 8232 and other similar USB radio receivers you can get very cheaply (for example from ebay), so for now the solution is to scan using the RTL radio and only use HackRF for fetching GSM data.

As far as I can tell, a cell phone only shows up on one of the frequencies at a time, so if you are going to track and count every cell phone around you, you need to listen to all the frequencies used. To listen to several frequencies, use the --numrecv argument to scan-and-livemon to use several receivers. Further, I am not sure if phones using 3G or 4G will show up as talking GSM to base stations, so this approach might not see all phones around you. I typically see 0-400 IMSI numbers an hour when looking around where I live.

I've tried to run the scanner on a Raspberry Pi 2 and 3 running Debian Buster, but the grgsm_livemon_headless process seems to be too CPU intensive to keep up. When GNU Radio prints 'O' to stdout, I am told it is caused by a buffer overflow between the radio and GNU Radio, caused by the program being unable to read the GSM data fast enough. If you see a stream of 'O's from the terminal where you started scan-and-livemon, you need to give the process more CPU power. Perhaps someone is able to optimize the code to a point where it becomes possible to set up RPi3 based GSM sniffers? I tried using Raspbian instead of Debian, but there seems to be something wrong with GNU Radio on Raspbian, causing glibc to abort().

24 September, 2017 06:30AM

September 23, 2017

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCNPy 0.2.7

A new version of the RcppCNPy package arrived on CRAN yesterday.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers.

This version updates internals for function registration, but otherwise mostly switches the vignette over to the shiny new pinp two-page template and package.

Changes in version 0.2.7 (2017-09-22)

  • Vignette updated to Rmd and use of pinp package

  • File src/init.c added for dynamic registration

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 September, 2017 07:07PM

Russell Coker

Converting Mbox to Maildir

MBox is the original and ancient format for storing mail on Unix systems; it consists of a single file per user under /var/spool/mail with the messages concatenated one after another. Obviously performance is very poor when deleting messages from a large mail store, as the entire file has to be rewritten. Maildir was invented for Qmail by Dan Bernstein and has a single message per file, giving fast deletes among other performance benefits. An ongoing issue over the last 20 years has been converting Mbox systems to Maildir. The various ways of getting IMAP to work with Mbox only made this more complex.

The Dovecot Wiki has a good page about converting Mbox to Maildir [1]. If you want to keep the same message UIDs and the same path separation characters then it will be a complex task. But if you just want to copy a small number of Mbox accounts to an existing server then it’s a bit simpler.

Dovecot has a mb2md.pl script to convert folders [2].

cd /var/spool/mail
mkdir -p /mailstore/example.com
for U in * ; do
  ~/mb2md.pl -s $(pwd)/$U -d /mailstore/example.com/$U
done

To convert the inboxes, shell code like the above is needed. If the users don't have IMAP folders (EG they are just POP users or use local Unix MUAs) then that's all you need to do.

cd /home
for DIR in */mail ; do
  U=$(echo $DIR| cut -f1 -d/)
  cd /home/$DIR
  for FOLDER in * ; do
    ~/mb2md.pl -s $(pwd)/$FOLDER -d /mailstore/example.com/$U/.$FOLDER
  done
  cp .subscriptions /mailstore/example.com/$U/subscriptions
done

Some shell code like the above will convert the IMAP folders to Maildir format. The end result is that the users will have to download all the mail again as their MUA will think that every message had been deleted and replaced. But as all servers with significant amounts of mail or important mail were probably converted to Maildir a decade ago this shouldn’t be a problem.

23 September, 2017 03:52AM by etbe

September 21, 2017

hackergotchi for Clint Adams

Clint Adams

PTT

“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

“Today I took a pic of myself pulling a train,” announced Adrian.

Spaniard pulling a train

Posted on 2017-09-21
Tags: bgs

21 September, 2017 10:32PM

September 20, 2017

hackergotchi for Steve Kemp

Steve Kemp

Retiring the Debian-Administration.org site

So previously I've documented the setup of the Debian-Administration website, and now that I'm going to retire it I'm planning how that will work.

There are currently 12 servers powering the site:

  • web1
  • web2
  • web3
  • web4
    • These perform the obvious role, serving content over HTTPS.
  • public
    • This is a HAProxy host which routes traffic to one of the four back-ends.
  • database
    • This stores the site-content.
  • events
    • There was a simple UDP-based protocol which sent notices here, from various parts of the code.
    • e.g. "Failed login for bob from 1.2.3.4".
  • mailer
    • Sends out emails. ("You have a new reply", "You forgot your password..", etc)
  • redis
    • This stored session-data, and short-term cached content.
  • backup
    • This contains backups of each host, via Obnam.
  • beta
    • A test-install of the codebase
  • planet
    • The blog-aggregation site

I've made a bunch of commits recently to drop the event-sending, since no more dynamic actions will be possible. So events can be retired immediately. redis will go when I turn off logins, as there will be no need for sessions/cookies. beta is only used for development, so I'll kill that too. Once logins are gone and anonymous content is disabled, there will be no need to send out emails, so mailer can be shut down.

That leaves a bunch of hosts:

  • database
    • I'll export the database and kill this host.
    • I will install mariadb on each web-node, and each host will be configured to talk to localhost only
    • I don't need to worry about the four databases receiving diverging content as updates will be disabled.
  • backup
  • planet
    • This will become orphaned, so I think I'll just move the content to the web-nodes.

All in all I think we'll just have five hosts left:

  • public to do the routing
  • web1-web4 to do the serving.

I think that's sane for the moment. I'm still pondering whether to export the code to static HTML; there's a lot of appeal as the load would drop a lot, but equally I have a hell of a lot of mod_rewrite redirections in place, and reworking all of them would be a pain. I suspect this is something that will be done in the future, maybe next year.

20 September, 2017 09:00PM

September 19, 2017

hackergotchi for Gunnar Wolf

Gunnar Wolf

Call to Mexicans: Open up your wifi #sismo

Hi friends,

~3hr ago, we just had a big earthquake, quite close to Mexico City. Fortunately, we are fine, as are (at least) most of our friends and family. Hopefully, all of them. But there are many (as in, tens of) damaged or destroyed buildings; over 50 people have died, and the numbers will surely rise until a good understanding of the event's strength is reached.

Mainly in these early hours after the quake, many people need to get in touch with their families and friends. There is a little help we can all provide: communication.

Open up your wireless network. Set it up unencrypted, for anybody to use.

Refrain from over-sharing graphical content: your social network groups don't need to see every video and every photo of the shaking moments and of broken buildings. Downloading all those images takes up valuable capacity on the saturated cellular networks.

This advice might be slow to spread... The important moment to act was two or three hours ago, or even now... But we are likely to have aftershocks; we are likely to have panic moments again. Do a little bit to help others in need!

19 September, 2017 09:52PM by gwolf

Sylvain Beucler

dot-zed extractor

Following last week's .zed format reverse-engineered specification, Loïc Dachary contributed a POC extractor!
It's available at http://www.dachary.org/loic/zed/; it can list non-encrypted metadata without a password, and extract files with a password (or .pem file).
Leveraging python-olefile and pycrypto, only 500 lines of code (test cases excluded) are enough to implement it :)
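
To give an idea of how little is needed, a .zed archive being an OLE2 container, a few lines with python-olefile (the file name below is invented) are enough to list its streams:

import olefile

# A .zed archive is an OLE2 compound document, so listing its streams (the
# non-encrypted structure and metadata) needs no password at all.
ole = olefile.OleFileIO("sample.zed")
for stream in ole.listdir():
    print("/".join(stream))
ole.close()

The actual extractor of course goes much further, parsing the metadata and decrypting the file contents with pycrypto.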

19 September, 2017 07:29PM

Reproducible builds folks

Reproducible Builds: Weekly report #125

Here's what happened in the Reproducible Builds effort between Sunday September 10 and Saturday September 16 2017:

Upcoming events

Reproducibility work in Debian

devscripts/2.17.10 was uploaded to unstable, fixing #872514. This adds a script, written by Chris Lamb, to report on the reproducibility status of installed packages.

#876055 was opened against Debian Policy to decide the precise requirements we should have on a build's environment variables.

Bugs filed:

Non-maintainer uploads:

  • Holger Levsen:

Reproducibility work in other projects

Patches sent upstream:

  • Bernhard M. Wiedemann:

Reviews of unreproducible packages

16 package reviews have been added, 99 have been updated and 92 have been removed in this week, adding to our knowledge about identified issues.

1 issue type has been updated:

diffoscope development

  • Juliana Oliveira Rodrigues:
    • Fix comparisons between different container types not comparing inside files. It was caused by falling back to binary comparison for different file types even for unextracted containers.
    • Add many tests for the fixed behaviour.
    • Other code quality improvements.
  • Chris Lamb:
    • Various code quality and style improvements, some of it using Flake8.
  • Mattia Rizzolo:
    • Add a check to prevent installation with python < 3.4

reprotest development

  • Ximin Luo:
    • Split up the very large __init__.py and remove obsolete earlier code.
    • Extend the syntax for the --variations flag to support parameters to certain variations like user_group, and document examples in README.
    • Add a --vary flag for the new syntax and deprecate --dont-vary.
    • Heavily refactor internals to support > 2 builds.
    • Support >2 builds using a new --extra-build flag.
    • Properly sanitize artifact_pattern to avoid arbitrary shell execution.

trydiffoscope development

Version 65 was uploaded to unstable by Chris Lamb including these contributions:

  • Chris Lamb:
    • Packaging maintenance updates.
    • Developer documentation updates.

Reproducible websites development

tests.reproducible-builds.org

  • Vagrant Cascadian and Holger Levsen:
    • Added two armhf boards to the build farm. #874682
  • Holger also:
    • use timeout to limit the diffing of the two build logs to 30min, which greatly reduced jenkins load again.

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Daniel Shahaf & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

19 September, 2017 05:45PM

September 18, 2017

Carl Chenet

The Github threat

Many voices arise now and then against the risks linked to the use of Github by Free Software projects. Yet the infatuation with the collaborative forge of the Octocat's Californian start-up doesn't seem to fade away.

In recent years, Github and its services have taken on an important role in software engineering, as they are seen as easy to use and efficient for a daily workload, with interesting functions for an enterprise collaborative workflow or within a Free Software project. What are the arguments against using its services, and are they valid? We will list them first, then we'll examine their validity.

1. Critical points

1.1 Centralization

The Github application belongs to a single entity, Github Inc, a US company which manages it alone. So a single company under US legislation controls access to most Free Software source code, which may become a problem for the groups using it when a code source is no longer available, for political or technical reasons.

The Octocat, the Github mascot

 

This centralization leads to another problem: as it has reached critical mass, it becomes more and more difficult not to have a Github account. People who don't use Github, by choice or not, are becoming a silent minority. It is now fashionable to use Github, and not doing so is seen as being “out of date”. The same phenomenon is a classic, and even the norm, for proprietary social networks (Facebook, Twitter, Instagram).

1.2 A Proprietary Software

When you interact with Github, you are using proprietary software, with no access to its source code and which may not work the way you think it does. It is a problem at different levels: first ideologically, but foremost in practice. In the Github case, we send them code that we can only control outside of their interface. We also send them personal information (profile, Github interactions). And above all, Github forces any project which goes through the US platform to use a crucial proprietary tool: its bug tracking system.

Windows, the epitome of proprietary software, even if others took the same path

 

1.3 The Uniformization

Working with the Github interface seems easy and intuitive to most. Lots of companies now use it as a source repository, and many developers leaving a company find the same Github working environment in the next one. This pervasive presence of Github in the free software development environment is part of the uniformization of developers' working space.

Uniforms always bring the army to my mind; here, the Clone army

2 – Critical points cross-examination

2.1 Regarding the centralization

2.1.1 Service availability rate

As said above, nowadays, Github is the main repository of Free Software source code. As such it is a favorite target for cyberattacks. DDOS attacks hit it in March and August 2015. On December 15, 2015, an outage led to the inaccessibility of 5% of the repositories. The same occurred on November 15. And these are only the incidents reported by Github itself. One can imagine that the real outage rate of the platform is underestimated.

2.1.2 Chain reaction could block Free Software development

Today many dependency management tools, such as npm for JavaScript, Bundler for Ruby or even pip for Python, can fetch an application's source code directly from Github. With Free Software projects becoming more and more interlinked and codependent, if one component is down, the whole development process stops.

One of the best examples is the npmgate. Any company could legally demand that Github take down some source code from its repository, which could create a chain reaction and block the development of many Free Software projects, as the Node.js community suffered from the decisions of npm, Inc, the company managing npm.

2.2 A historical precedent: SourceForge

Github didn't appear out of the blue. In its time, its predecessor, SourceForge, was also extremely popular.

Heavily centralized, based on strong interaction with the community, SourceForge is now seen as an aging SAAS (Software As A Service) and sees most of its customers fleeing to Github, which creates lots of hurdles for those who stayed. The Gimp project suffered from spam and terrible advertising, which led to the departure of the VLC project, and then from installers corrupted with adware served instead of the official Gimp installer for Windows. And finally, the Gimp project's SourceForge account was hijacked by… the SourceForge team itself!

These are very recent examples of what a commercial entity can do when it is under pressure from its stakeholders. It is vital to really understand what it means to trust them with the centralization of data and exchanges, which could have tremendous repercussions on the day-to-day life and the habits of the Free Software and open source community.

2.3. Regarding proprietary software

2.3.1 One community, several opinions on proprietary software

Mostly based on ideology, this point deals with the definition every member of the community gives to Free Software and open source. It mostly boils down to one thing: is it viral or not? Or GPL vs MIT/BSD.

Those on the side of viral Free Software will have trouble using proprietary software, as in their view it shouldn't even exist. It must be assimilated, to quote Star Trek, since it is a connected black box, endangering privacy, corrupting our usage for profit and restraining our freedom to use what we own as we please, etc.

Those on the side of complete freedom have no qualms about using proprietary software, as its very existence is a consequence of freedom without restriction. They even accept that code they developed may end up in proprietary software, which is quite a common occurrence. This part of the Free Software community has no qualms about using Github, which is well within their ideological parameters. Just take a look at the Janson amphitheater during Fosdem and check how many Apple laptops running macOS are around.

FreeBSD, the main BSD project under the BSD license

2.3.2 Data loss and data restrictions linked to proprietary software use

Even without ideological considerations, and just focusing on the Github infrastructure, the bug tracking system is a major problem by itself.

Bug reports build the memory of Free Software projects. They are the entry point for new contributors, the place to find bug reports, requests for new functions, etc. The project history can't be limited only to the code. It's very common to find bug reports when you copy and paste an error message into a search engine. Their historical importance is precious not only for the project itself, but also for its present and future users.

Github gives the ability to extract bug reports through its API. What would happen if Github goes down or if the platform no longer supports this feature? In my opinion, not that many projects have ever thought of this outcome. How could they move all the data generated by Github into a new bug tracking system?
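
For projects that want to guard against that outcome today, pulling a raw copy of their issues out of the API is not hard; here is a minimal sketch (the repository name, output file and optional token are placeholders) using only the Python standard library:

import json
import urllib.request

def export_issues(repo, token=None):
    # Page through the public issues API (open and closed issues) of
    # github.com/<repo> and return everything as a list of dicts.
    issues, page = [], 1
    while True:
        url = "https://api.github.com/repos/%s/issues?state=all&per_page=100&page=%d" % (repo, page)
        request = urllib.request.Request(url)
        if token:  # unauthenticated requests are heavily rate-limited
            request.add_header("Authorization", "token %s" % token)
        batch = json.loads(urllib.request.urlopen(request).read().decode("utf-8"))
        if not batch:
            break
        issues.extend(batch)
        page += 1
    return issues

with open("issues-backup.json", "w") as backup:
    json.dump(export_issues("owner/repo"), backup, indent=2)

Of course this only saves the raw JSON; importing it into another bug tracking system is the hard part.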

One old example now is Astrid, a TODO list bought by Yahoo a few years ago. Very popular, it grew fast until it was closed overnight, with only a few weeks for its users to extract their data. It was only a to-do list. The same situation with Github would be tremendously difficult to manage for several projects, if they even have the ability to deal with it. Code would still be available and could still live somewhere else, but the project memory would be lost. A project like Debian has today more than 800,000 bug reports, which are a data treasure trove about problems solved, function requests and where the development stands on each. The developers of the Cpython project have anticipated the problem and decided not to use Github's bug tracking system.

Issues, the Github proprietary bug tracking system

Another thing we could lose if Github suddenly disappeared: all the work currently done in the pull requests (aka PRs). This Github function gives you the ability to clone a project's Github repository, modify it to fit your needs, then offer your own modifications to the original repository. The original repository's owner will then review said modifications, and if he or she agrees with them, merge them into the original repository. As such, it's one of the main advantages of Github, since it can be done easily through its graphical interface.

However, reviewing all the PRs may take quite a long time, and most of the successful projects have several ongoing PRs at any time. And these PRs and/or the proprietary bug tracking system are commonly used as a platform for comments and discussion between developers.

The code itself is not lost if Github goes down (except in one specific situation described below), but the peer review work materialized in the PRs and the bug tracking system is lost. Let's remember that the PR mechanism lets you clone and modify projects and then generate PRs directly from Github's proprietary web interface, without downloading a single line of code onto your computer. In this particular case, if Github goes down, all that code and work in progress is lost.

Some also use Github as a bookmarking tool, following their favorite projects' activity through the Watch feature. This kind of technology-watch data collection would also be lost if Github went down.

Debian, one of the main Free Software projects with at least a thousand official contributors

2.4 Uniformization

The Free Software community is walking a tightrope between the standardization needed for easier interoperability between its products and an attraction to novelty, driven by a strong need to differentiate itself from what already exists.

Github popularized the use of Git, a great tool now used in many sectors far removed from its original field of programming. Step by step, Git has become so prominent that it's almost impossible to even consider another source control manager, even though awesome alternative solutions such as Mercurial exist, unfortunately not as popular.

A new Free Software project is now a Git repository on Github with a README.md added as a quick description. All the other solutions are effectively ostracized. How? Few if any potential contributors would even notice such projects. It seems very difficult now to ask potential contributors to learn a new source control manager AND a new forge for every project they want to contribute to, which was a basic requirement only a few years ago.

It's quite sad, because Github, by offering a single uniform experience to its users, cuts them off from a whole realm of possibilities. Maybe Github is one of the best web-based version control platforms. But being the dominant one leaves no room for a new competitor to grow, and it lets Github introduce newcomers to development through a narrow set of features, largely unrelated to the strengths of the Git tool itself.

3. Centralization, uniformization, proprietary software… What’s next? Laziness?

The fight against centralization is a central part of the Free Software ideology, since centralization strengthens the power of those who manage it and who, through it, control those who are managed by it. The allergy to uniformization, born in reaction to the major software companies and their wish to impose a closed, commercial software world, was for a long time the main fuel for the thirst for innovation and for the development of intelligent alternatives. As said above, part of the Free Software community was built as a reaction to proprietary software and the threat it represents. The other part, without hoping for its disappearance, still chose a development model opposed to proprietary software, at least in the beginning, as nowadays there are more and more bridges between the two.

The Github effect is a morbid one because of its consequences: centralization, uniformization and the use of proprietary software as a bug tracking system, at the very least. But a few years ago, the Dear Github buzz showed one more side effect, one I had never thought about: laziness. For those who don't know what it is about, this letter is a complaint from spokespersons of several Free Software projects demanding that the Github team finally implement, after years of polite requests, some new features.

Since when do Free Software projects facing a roadblock beg for clemency instead of building the path they need themselves? When Torvalds ran into the BitKeeper problem and the Linux kernel development team could no longer use its revision control software, he developed Git. Not being able to use a tool, or finding features lacking, is the main motivation to seek alternative solutions and, as such, a driving force of the Free Software movement. Every Free Software community member able to code should have this reflex. You don't like what Github offers? Switch to Gitlab. You don't like Gitlab? Improve it or build your own solution.

The Gitlab logo

Let's be crystal clear: I've never said that every blocked Free Software developer should code his or her own alternative. We all have our own priorities, and some of us even like our beauty sleep, myself included. But seeing that this open letter to Github has 1,340 names attached to it, among them spokespersons for major Free Software projects, showed me that the need, the willpower and the strength to code a replacement are there. Maybe that replacement will be born from this letter; that would be the best outcome of this buzz.

In the end, Github usage is just another example of the massification of Internet usage. Just as Internet users flock to massively centralized social networks like Facebook or Twitter, developers are following the same path with Github. Even if a large fraction of developers realize the threat posed by this centralized and proprietary organization, the whole community is following this trend of centralization and uniformization. The Github service is useful, free or reasonably priced (depending on the features you need), easy to use and up most of the time. Why would we try anything else? Maybe because others are using us while we savor the convenience? The Free Software community seems quite sleepy to me.

The lion enjoying the warmth of the hearth

About Me

Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker.

Follow me on social networks

Translated from French by Stéphanie Chaptal. Original article written in 2015.

18 September, 2017 10:00PM by Carl Chenet

Russ Allbery

Consolidation haul

My parents are less fond than I am of filling every available wall in their house with bookshelves and did a pruning of their books. A lot of them duplicated other things that I had, or didn't sound interesting, but I still ended up with two boxes of books (and now have to decide which of my books to prune, since I'm out of shelf space).

Also included is the regular accumulation of new ebook purchases.

Mitch Albom — Tuesdays with Morrie (nonfiction)
Ilona Andrews — Clean Sweep (sff)
Catherine Asaro — Charmed Sphere (sff)
Isaac Asimov — The Caves of Steel (sff)
Isaac Asimov — The Naked Sun (sff)
Marie Brennan — Dice Tales (nonfiction)
Captain Eric "Winkle" Brown — Wings on My Sleeve (nonfiction)
Brian Christian & Tom Griffiths — Algorithms to Live By (nonfiction)
Tom Clancy — The Cardinal of the Kremlin (thriller)
Tom Clancy — The Hunt for Red October (thriller)
Tom Clancy — Red Storm Rising (thriller)
April Daniels — Sovereign (sff)
Tom Flynn — Galactic Rapture (sff)
Neil Gaiman — American Gods (sff)
Gary J. Hudson — They Had to Go Out (nonfiction)
Catherine Ryan Hyde — Pay It Forward (mainstream)
John Irving — A Prayer for Owen Meany (mainstream)
John Irving — The Cider House Rules (mainstream)
John Irving — The Hotel New Hampshire (mainstream)
Lawrence M. Krauss — Beyond Star Trek (nonfiction)
Lawrence M. Krauss — The Physics of Star Trek (nonfiction)
Ursula K. Le Guin — Four Ways to Forgiveness (sff collection)
Ursula K. Le Guin — Words Are My Matter (nonfiction)
Richard Matheson — Somewhere in Time (sff)
Larry Niven — Limits (sff collection)
Larry Niven — The Long ARM of Gil Hamilton (sff collection)
Larry Niven — The Magic Goes Away (sff)
Larry Niven — Protector (sff)
Larry Niven — World of Ptavvs (sff)
Larry Niven & Jerry Pournelle — The Gripping Hand (sff)
Larry Niven & Jerry Pournelle — Inferno (sff)
Larry Niven & Jerry Pournelle — The Mote in God's Eye (sff)
Flann O'Brien — The Best of Myles (nonfiction)
Jerry Pournelle — Exiles to Glory (sff)
Jerry Pournelle — The Mercenary (sff)
Jerry Pournelle — Prince of Mercenaries (sff)
Jerry Pournelle — West of Honor (sff)
Jerry Pournelle (ed.) — Codominium: Revolt on War World (sff anthology)
Jerry Pournelle & S.M. Stirling — Go Tell the Spartans (sff)
J.D. Salinger — The Catcher in the Rye (mainstream)
Jessica Amanda Salmonson — The Swordswoman (sff)
Stanley Schmidt — Aliens and Alien Societies (nonfiction)
Cecilia Tan (ed.) — Sextopia (sff anthology)
Lavie Tidhar — Central Station (sff)
Catherynne Valente — Chicks Dig Gaming (nonfiction)
J.E. Zimmerman — Dictionary of Classical Mythology (nonfiction)

This is an interesting tour of a lot of stuff I read as a teenager (Asimov, Niven, Clancy, and Pournelle, mostly in combination with Niven but sometimes his solo work).

I suspect I will no longer consider many of these books to be very good, and some of them will probably go back into used bookstores after I've re-read them for memory's sake, or when I run low on space again. But all those mass market SF novels were a big part of my teenage years, and a few (like Mote In God's Eye) I definitely want to read again.

Also included is a random collection of stuff my parents picked up over the years. I don't know what to expect from a lot of it, which makes it fun to anticipate. Fall vacation is coming up, and with it a large amount of uninterrupted reading time.

18 September, 2017 12:34AM

September 17, 2017

hackergotchi for Sean Whitton

Sean Whitton

Debian Policy call for participation -- September 2017

Here’s a summary of the bugs against the Debian Policy Manual. Please consider getting involved, whether or not you’re an existing contributor.

Consensus has been reached and help is needed to write a patch

#172436 BROWSER and sensible-browser standardization

#273093 document interactions of multiple clashing package diversions

#299007 Transitioning perms of /usr/local

#314808 Web applications should use /usr/share/package, not /usr/share/doc/…

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#476810 Please clarify 12.5, “Copyright information”

#484673 file permissions for files potentially including credential informa…

#491318 init scripts “should” support start/stop/restart/force-reload - why…

#556015 Clarify requirements for linked doc directories

#568313 Suggestion: forbid the use of dpkg-statoverride when uid and gid ar…

#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.

#582109 document triggers where appropriate

#587991 perl-policy: /etc/perl missing from Module Path

#592610 Clarify when Conflicts + Replaces et al are appropriate

#613046 please update example in 4.9.1 (debian/rules and DEB_BUILD_OPTIONS)

#614807 Please document autobuilder-imposed build-dependency alternative re…

#628515 recommending verbose build logs

#664257 document Architecture name definitions

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#685746 debian-policy Consider clarifying the use of recommends

#688251 Built-Using description too aggressive

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

#767839 Linking documentation of arch:any package to arch:all

#770440 policy should mention systemd timers

#773557 Avoid unsafe RPATH/RUNPATH

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the ’-e’ argument to x-terminal-emulator works like ’--’

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#515856 remove get-orig-source

#542288 Versions for native packages, NMU’s, and binary only uploads

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#645696 [copyright-format] clearer definitions and more consistent License:…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#683222 say explicitly that debian/changelog is required in source packages

#688251 Built-Using description too aggressive

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#850729 Documenting special version number suffixes

#874090 Clarify wording of some passages

#874095 copyright-format: Use the “synopsis” term established in the de…

Merged for the next release

#661928 recipe for determining shlib package name

#679751 please clarify package account and home directory location in policy

#683222 say explicitly that debian/changelog is required in source packages

#870915 [5.6.30] Testsuite: There are much more defined values

#872893 Chapters, sections, appendices and numbering

#872895 Include multi-page HTML in package

#872896 An html.tar.gz has leaked into the .deb?

#872900 Very generic info file name

#872950 Too much indirection in info file menus

#873819 upgrading-checklist.txt: typo pgpsignurlmangle in section 4.11 of V…

#874411 missing line breaks in summary of ways maintainers scripts are call…

17 September, 2017 11:04PM

Russ Allbery

Free software log (July and August 2017)

I've wanted to start making one of these posts for a few months but have struggled to find the time. But it seems like a good idea, particularly since I get more done when I write down what I do, so you all get a rather belated one. This covers July and August; hopefully the September one will come closer to the end of September.

Debian

August was DebConf, which included a ton of Policy work thanks to Sean Whitton's energy and encouragement. During DebConf, we incorporated work from Hideki Yamane to convert Policy to reStructuredText, which has already made it far easier to maintain. (Thanks also to David Bremner for a lot of proofreading of the result.) We also did a massive bug triage and closed a ton of older bugs on which there had been no forward progress for many years.

After DebConf, as expected, we flushed out various bugs in the reStructuredText conversion and build infrastructure. I fixed a variety of build and packaging issues and started doing some more formatting cleanup, including moving some footnotes to make the resulting document more readable.

During July and August, partly at DebConf and partly not, I also merged wording fixes for seven bugs and proposed wording (not yet finished) for three more, as well as participated in various Policy discussions.

Policy was nearly all of my Debian work over these two months, but I did upload a new version of the webauth package to build with OpenSSL 1.1 and drop transitional packages.

Kerberos

I still haven't decided my long-term strategy with the Kerberos packages I maintain. My personal use of Kerberos is now fairly marginal, but I still care a lot about the software and can't convince myself to give it up.

This month, I started dusting off pam-krb5 in preparation for a new release. There's been an open issue for a while around defer_pwchange support in Heimdal, and I spent some time on that and tracked it down to an upstream bug in Heimdal as well as a few issues in pam-krb5. The pam-krb5 issues are now fixed in Git, but I haven't gotten any response upstream from the Heimdal bug report. I also dusted off three old Heimdal patches and submitted them as upstream merge requests and reported some more deficiencies I found in FAST support. On the pam-krb5 front, I updated the test suite for the current version of Heimdal (which changed some of the prompting) and updated the portability support code, but haven't yet pulled the trigger on a new release.

Other Software

I merged a couple of pull requests in podlators, one to fix various typos (thanks, Jakub Wilk) and one to change the formatting of man page references and function names to match the current Linux manual page standard (thanks, Guillem Jover). I also documented a bad interaction with line-buffered output in the Term::ANSIColor man page. Neither of these have seen a new release yet.

17 September, 2017 08:08PM

Uwe Kleine-König

IPv6 in my home network

I am lucky and get both IPv4 (without CGNAT) and IPv6 from my provider. Recently, after upgrading my desk router (a Netgear WNDR3800 that serves the network on my desk) from OpenWRT to the latest LEDE, I looked into what could be improved in the IPv6 setup for both my home network (served by a FRITZ!Box) and my desk network.

Unfortunately I was unable to improve the situation compared to what I already had before.

Things that work

Making IPv6 work in general was easy, just a few clicks in the configuration of the FRITZ!Box and it mostly worked. After that I have:

  • IPv6 connectivity in the home net
  • IPv6 connectivity in the desk net

Things that don't work

There are a few things, however, that I'd like to have which don't seem to be that easy:

ULA for both nets

I let the two routers each announce a ULA prefix. Unfortunately I was unable to make the LEDE box announce its net on the wan interface for clients in the home net. So the hosts in the desk net know how to reach the hosts in the home net, but not the other way round, which makes it quite pointless. (It works fine as long as the FRITZ!Box announces a global net, but I'd like local communication to work independently of global connectivity.)

To fix this I'd need something like radvd on my LEDE router, but that isn't provided by LEDE (or OpenWRT) any more, as odhcpd is supposed to be used instead, which AFAICT is unable to send RAs on the wan interface. OK, I could probably install bird, but that seems a bit oversized. I created an entry in the LEDE forum, but without any reply up to now.
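
If radvd were available (or built by hand), a minimal sketch of what its configuration could look like follows. The interface name, the ULA prefix and the idea of announcing only a route (rather than becoming a default router) are assumptions for illustration, not a tested setup:

# /etc/radvd.conf on the LEDE router (hypothetical)
interface eth0.2                   # the wan interface facing the home net; name is a placeholder
{
    AdvSendAdvert on;
    AdvDefaultLifetime 0;          # don't offer ourselves as a default router
    route fd12:3456:789a:1::/64    # placeholder ULA prefix of the desk net
    {
    };
};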

Alternatively (but less pretty) I could setup an IPv6 route in the FRITZ!Box, but that only works with a newer firmware and as this router is owned by my provider I cannot update it.

Firewalling

The FRITZ!Box has a firewall that is not very configurable. I can punch a hole in it for hosts with a given interface-ID, but that only works for hosts in the home net, not the machines in the delegated subnet behind the LEDE router. In fact I think the FRITZ!Box should delegate firewalling for a delegated net also to the router of that subnet. (Hello AVM, did you hear me? At least a checkbox for that would be nice.)

So having a global address on the machines on my desk doesn't allow me to reach them from the internet.

17 September, 2017 10:15AM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, August 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 189 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours is the same as last month.

The security tracker currently lists 59 packages with a known CVE and the dla-needed.txt file 60. The number of packages with open issues decreased slightly compared to last month but we’re not yet back to the usual situation. The number of CVE to fix per package tends to increase due to the increased usage of fuzzers.

Thanks to our sponsors

New sponsors are in bold.


17 September, 2017 08:24AM by Raphaël Hertzog

September 15, 2017

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, August 2017

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from July. I only worked 10 hours, so I will carry over 6 hours to the next month.

I prepared and released an update on the Linux 3.2 longterm stable branch (3.2.92), and started work on the next update. I rebased the Debian linux package on this version, but didn't yet upload it.

15 September, 2017 05:56PM

hackergotchi for Chris Lamb

Chris Lamb

Which packages on my system are reproducible?

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process.

As part of this project I wrote a script to determine which packages installed on your system are "reproducible" or not:

$ apt install devscripts
[]

$ reproducible-check
[]
W: subversion (1.9.7-2) is unreproducible (libsvn-perl, libsvn1, subversion) <https://tests.reproducible-builds.org/debian/subversion>
W: taglib (1.11.1+dfsg.1-0.1) is unreproducible (libtag1v5, libtag1v5-vanilla) <https://tests.reproducible-builds.org/debian/taglib>
W: tcltk-defaults (8.6.0+9) is unreproducible (tcl, tk) <https://tests.reproducible-builds.org/debian/tcltk-defaults>
W: tk8.6 (8.6.7-1) is unreproducible (libtk8.6, tk8.6) <https://tests.reproducible-builds.org/debian/tk8.6>
W: valgrind (1:3.13.0-1) is unreproducible <https://tests.reproducible-builds.org/debian/valgrind>
W: wavpack (5.1.0-2) is unreproducible (libwavpack1) <https://tests.reproducible-builds.org/debian/wavpack>
W: x265 (2.5-2) is unreproducible (libx265-130) <https://tests.reproducible-builds.org/debian/x265>
W: xen (4.8.1-1+deb9u1) is unreproducible (libxen-4.8, libxenstore3.0) <https://tests.reproducible-builds.org/debian/xen>
W: xmlstarlet (1.6.1-2) is unreproducible <https://tests.reproducible-builds.org/debian/xmlstarlet>
W: xorg-server (2:1.19.3-2) is unreproducible (xserver-xephyr, xserver-xorg-core) <https://tests.reproducible-builds.org/debian/xorg-server>
282/4494 (6.28%) of installed binary packages are unreproducible.

Whether a package is "reproducible" or not is determined by querying the Debian Reproducible Builds testing framework.



The --raw command-line argument lets you play with the data in more detail. For example, you can see who maintains your unreproducible packages:

$ reproducible-check --raw | dd-list --stdin
Alec Leamas <leamas.alec@gmail.com>
   lirc (U)

Alessandro Ghedini <ghedo@debian.org>
   valgrind

Alessio Treglia <alessio@debian.org>
   fluidsynth (U)
   libsoxr (U)
[]


reproducible-check is available in devscripts since version 2.17.10, which landed in Debian unstable on 14th September 2017.

15 September, 2017 08:29AM

September 14, 2017

Lior Kaplan

Public money? Public Code!

An open letter published today to the EU government says:

Why is software created using taxpayers’ money not released as Free Software?
We want legislation requiring that publicly financed software developed for the public sector be made publicly available under a Free and Open Source Software licence. If it is public money, it should be public code as well.

Code paid by the people should be available to the people!

See https://publiccode.eu/ for the campaign details.

This makes me think of starting an Israeli version…


Filed under: Uncategorized

14 September, 2017 09:30AM by Kaplan

hackergotchi for James McCoy

James McCoy

devscripts needs YOU!

Over the past 10 years, I've been a member of a dwindling team of people maintaining the devscripts package in Debian.

Nearly two years ago, I sent out a "Request For Help" since it was clear I didn't have adequate time to keep driving the maintenance.

In the mean time, Jonas split licensecheck out into its own project and took over development. Osamu has taken on much of the maintenance for uscan, uupdate, and mk-origtargz.

Although that has helped spread the maintenance costs, there's still a lot that I haven't had time to address.

Since Debian is still fairly early in the development cycle for Buster, I've decided this is as good a time as any for me to officially step down from active involvement in devscripts. I'm willing to keep moderating the mailing list and other related administrivia (which is fairly minimal given the repo is part of collab-maint), but I'll be unsubscribing from all other notifications.

I think devscripts serves as a good funnel for useful scripts to get in front of Debian (and its derivatives) developers, but Jonas may also be onto something by pulling scripts out to stand on their own. One of the troubles with "bucket" packages like devscripts is the lack of visibility into when to retire scripts. Breaking scripts out on their own, and possibly creating multiple binary packages, certainly helps with that. Maybe uscan and friends would be a good next candidate.

At the end of the day, I've certainly enjoyed being able to play my role in helping simplify the life of all the people contributing to Debian. I may come back to it some day, but for now it's time to let someone else pick up the reins.

If you're interested in helping out, you can join #devscripts on OFTC and/or send a mail to <devscripts-devel@lists.alioth.debian.org>.

14 September, 2017 03:18AM

September 13, 2017

hackergotchi for Shirish Agarwal

Shirish Agarwal

Android, Android marketplace and gaming addiction.

This will be a longish piece, so please bear with me and settle in with tea, coffee, beer or anything stronger you desire while reading below 🙂

I had bought an Android phone, a Samsung J5, just before going to DebConf 2016. It was more about being in trend than really using it. The one I shared is the upgraded (recentish) version; the one I have is the 2 GB model, for which I had paid around double the list price. The only reason I bought this model is that it had a ‘removable battery’ at the price point I was willing to pay. I did see that Samsung has the same ham-handed issues with audio that Nokia devices used to have, with the speakers and microphone probably the cheapest you can get on the market. Nokia was the same, at least at the lower end of the market, while Oppo has loud ringtones and loud music, perfect for those who are a bit hard of hearing (as yours truly is).

I had been pleasantly surprised by the quality of photos the Samsung J5 was churning out, even though I’m a less-than-average shooter and have never really been into photography, so it was a sort of wake-up call as to where camera sensor technology is heading. And of course, with newer phones the kind of detail they can capture is mesmerizing to say the least, although wide-angle shots would still take some time to get right, I guess.

If memory serves me right, some time back Laura Arjona Reina (who handles part of debian-publicity and part of debian-women, among other responsibilities) shared a blog post on p.d.o. describing the troubles she had while exporting data from her phone. She shared that a while ago, and I lack the time and energy to try and find it now (that specific blog post really is worth bookmarking, though).

What was interesting, though, is that a few years ago I had gone to Bangalore, where there is an organization which I like and admire, CIS, great for researchers. They had done a project buying between 10 and 20 phones of Chinese origin from the market (almost all mobiles sold in India have their CPU, APU etc. fabricated in China/Taiwan and are then assembled here; what is done locally is at most assembly, which for all political purposes is called ‘manufacturing’). All the mobiles kept quite a bit of information on the device even after you wiped them clean or put some other ROM on them. The CIS site is more than a bit cluttered, otherwise I would have shared the direct link. I do hope to send an e-mail to CIS, and hopefully they will respond with the report, which I will share here as and when I get it. It would be interesting to know whether, after people flash a custom ROM, the status quo remains the same as before. I suspect it does, as flashing ROMs on phones is still a bit of a specialized subject, at least here in India, with even an average phone costing a month or two’s salary or more, and the idea of bricking the phone scares most people (including yours truly).

Anyways, for a long time I was in bed and had the phone with me. I used two games from the Android marketplace which both mum and I enjoyed, and still enjoy: Real Jigsaw and Jigsaw Puzzle HD. The permissions dialog which Real Jigsaw, among other games, presents is horrible, and part of me freaks out that all such apps have unrestricted access to my storage area. Ideally, Android should either partition the storage or give the user the ability to have a private space for their photos and whatever other media they have, with the rest of the area being like a public park. If anybody has any thoughts on partitioning on Android phones, I would like to hear them.

One game, though, which really hooked mumma and me is ‘The Island Experiment’. It reminded me of my younger days, when gaming addiction was not treated as a disease but thankfully now is. I would call myself somewhat of a ‘functional addict’, as in I do my everyday things, work etc., but do dream about the game and what it will show me next. Part of it is that the game is web-based (which means it needs a constant internet connection) and web access is somewhat pricey, although with Reliance Jio, an upcoming data network operator with bundles of money and promising the moon, a low-bandwidth game like the one mum and I are playing hopefully will not run into network issues. I haven’t used tshark or any such tool to analyze the traffic, but I guess it probably just sends short messages with the number of clicks in a time period and things like that, with all the rest (I guess) happening on the mobile itself. I know at some point I will probably try to put a custom ROM on it, but which one is the question, as there are so many, and also which is most compatible with my device. It seems I would have to do a lot of homework before I can make any choices.

A couple of months back, a friend of mine, Akshat, who has been using Android for a few years, enabled Developer Options for me, which I didn’t know about till he shared that info with me. I do hope people check out Akshat’s repo, as he has made quite a few useful scripts, especially if you are into digital photography. I shared some GIMP scripting with him a few days back, so along with ImageMagick you might see him doing some rough scripts in it. Of course, if people use them and give feedback he might clean the scripts up a bit so they give useful error messages and statements like ‘gimp is not installed on your system, please install it or ask for a specific version’, but as it goes in free software, that is somewhat directly proportional to the number of users and bug reports behind it.

A good example of what I mean is youtube-dl. I filed 873853, where I shared the upstream ticket. Apparently YouTube changed again a few days back, and while upstream has fixed it, the youtube-dl maintainer probably needs to find time to get the new version uploaded. Apparently the issue lies in –

[$] dpkg -L youtube-dl | grep youtube.py
/usr/lib/python3/dist-packages/youtube_dl/extractor/youtube.py

Hopefully somebody does the needful.
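
In the meantime, one possible stop-gap (just a sketch, not the Debian way of doing things) is to install the current upstream release into your home directory with pip and use that until the package is updated:

$ pip3 install --user youtube-dl
$ ~/.local/bin/youtube-dl --version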

Btw, I find f-droid extremely useful, and especially osmand, but sadly neither of them is shared or talked about much by people 😦

The reason I shared about Developer Options in Android is that a few days back I noticed that the phone wonks out and shows unpredictable behaviour, such as not letting me browse the web or make additions and deletions via the Google Play Store, and the like. Things came to a head a few days back when I saw a fibre-optic splicing operation being carried out near my home by workers of the state operator, which elated me, and I wanted to shoot a video of it, but the battery died/there was no power even though I hadn’t used the phone much. I have deliberately shared the Hindi version, which shows how that knowledge is now coming to the masses. I had seen fibre-optic splicing more than a decade and a half back at some net conference where it was “going to be in your neighbourhood soonish”; hopefully it will happen soon 🙂

I had my suspicions for quite some time that all the issues with the phone were down to it not charging properly. During the course of my investigation, I found out that in Developer Options there is an option called USB Configuration, and changing that from the default MTP (Media Transfer Protocol, which is basically used to move movies, music or any other files between the phone and the computer) led to much better behaviour on my Android phone. But this caused an unexpected side effect: I got pretty aggressive polling of the phone by the computer, even after saying I do not want to share the phone details with the computer. This I filed as 874216. The phone, and I am guessing most Samsung phones today, comes with an adaptor with a male USB plug which goes into the phone’s USB port. There is the classical mains port for electricity, but like most people I rely heavily on USB charging, even for fully charging a deeply powered-down phone.

One interesting project which I saw arrive in Debian some days back is dummydroid. I did file a bug about it. I do hope the maintainer adds some more documentation; I am sure many people would use and add to the resource if the documentation were there. I did take a look at the package, and a profile seems to be an XML key/value kind of database. Having more profiles shouldn’t be hard if we knew what info needs to be added and how to find that info.

Lastly, I am slowly transferring all the above knowledge to my mum as well, although in small doses. She, just like me, has had problems moving from a resistive touchscreen to a capacitive one. You can call me wrong, but resistive touchscreens seemed superior and not as error-prone or liable to register mistakes as capacitive touchscreens can be. There may be a setting to raise or lower the touch threshold, which I have not been able to find as of yet.

Hope somebody finds something useful in there. I do hope that Debian becomes a replacement that can be used on such mobiles, but then it would also have to offer some sort of mainstream content with editors to help people find stuff, something that Debian is not so good at currently. Also, I’m not sure Synaptic is a good fit as a mobile store.


Filed under: Miscellenous Tagged: #Android, #capacitive touchscreen, #custom ROMs, #digital photography, #dummydroid, #f-droid, #fabrication, #flashing, #game addiction, #Google Play Store, #Mainstreaming Debian, #mobile connectivity, #Oppo, #osmand, #planet-debian, #resistive touchscreen, #Samsung Galaxy J5, #scripting, #USB charging, #USB configuration, #youtube-dl, gaming

13 September, 2017 02:44PM by shirishag75

hackergotchi for Vincent Bernat

Vincent Bernat

Route-based IPsec VPN on Linux with strongSwan

A common way to establish an IPsec tunnel on Linux is to use an IKE daemon, like the one from the strongSwan project, with a minimal configuration1:

conn V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64
  authby      = psk
  auto        = route

The same configuration can be used on both sides. Each side will figure out if it is “left” or “right”. The IPsec site-to-site tunnel endpoints are 2001:db8:­1::1 and 2001:db8:­2::1. The protected subnets are 2001:db8:­a1::/64 and 2001:db8:­a2::/64. As a result, strongSwan configures the following policies in the kernel:

$ ip xfrm policy
src 2001:db8:a1::/64 dst 2001:db8:a2::/64
        dir out priority 399999 ptype main
        tmpl src 2001:db8:1::1 dst 2001:db8:2::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir fwd priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir in priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
[…]

This kind of IPsec tunnel is a policy-based VPN: encapsulation and decapsulation are governed by these policies. Each of them contains the following elements:

  • a direction (out, in or fwd2),
  • a selector (source subnet, destination subnet, protocol, ports),
  • a mode (transport or tunnel),
  • an encapsulation protocol (esp or ah), and
  • the endpoint source and destination addresses.

When a matching policy is found, the kernel will look for a corresponding security association (using reqid and the endpoint source and destination addresses):

$ ip xfrm state
src 2001:db8:1::1 dst 2001:db8:2::1
        proto esp spi 0xc1890b6e reqid 4 mode tunnel
        replay-window 0 flag af-unspec
        auth-trunc hmac(sha256) 0x5b68[…]8ba2904 128
        enc cbc(aes) 0x8e0e377ad8fd91e8553648340ff0fa06
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[…]

If no security association is found, the packet is put on hold and the IKE daemon is asked to negotiate an appropriate one. Otherwise, the packet is encapsulated. The receiving end identifies the appropriate security association using the SPI in the header. Two security associations are needed to establish a bidirectional tunnel:

$ tcpdump -pni eth0 -c2 -s0 esp
13:07:30.871150 IP6 2001:db8:1::1 > 2001:db8:2::1: ESP(spi=0xc1890b6e,seq=0x222)
13:07:30.872297 IP6 2001:db8:2::1 > 2001:db8:1::1: ESP(spi=0xcf2426b6,seq=0x204)

All IPsec implementations are compatible with policy-based VPNs. However, some configurations are difficult to implement. For example, consider the following proposition for redundant site-to-site VPNs:

Redundant VPNs between 3 sites

A possible configuration between V1-1 and V2-1 could be:

conn V1-1-to-V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64,2001:db8:a6::cc:1/128,2001:db8:a6::cc:5/128
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64,2001:db8:a6::/64,2001:db8:a8::/64
  authby      = psk
  keyexchange = ikev2
  auto        = route

Each time a subnet is modified on one site, the configurations need to be updated on all sites. Moreover, overlapping subnets (2001:db8:­a6::/64 on one side and 2001:db8:­a6::cc:1/128 at the other) can also be problematic.

The alternative is to use route-based VPNs: any packet traversing a pseudo-interface will be encapsulated using a security policy bound to the interface. This brings two features:

  1. Routing daemons can be used to distribute routes to be protected by the VPN. This decreases the administrative burden when many subnets are present on each side.
  2. Encapsulation and decapsulation can be executed in a different routing instance or namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).

Route-based VPN on Juniper

Before looking at how to achieve that on Linux, let’s have a look at the way it works with a JunOS-based platform (like a Juniper vSRX). This platform has a long-standing history of supporting route-based VPNs (a feature already present in the Netscreen ISG platform).

Let’s assume we want to configure the IPsec VPN from V3-2 to V1-1. First, we need to configure the tunnel interface and bind it to the “private” routing instance containing only internal routes (with IPv4, they would have been RFC 1918 routes):

interfaces {
    st0 {
        unit 1 {
            family inet6 {
                address 2001:db8:ff::7/127;
            }
        }
    }
}
routing-instances {
    private {
        instance-type virtual-router;
        interface st0.1;
    }
}

The second step is to configure the VPN:

security {
    /* Phase 1 configuration */
    ike {
        proposal IKE-P1 {
            authentication-method pre-shared-keys;
            dh-group group20;
            encryption-algorithm aes-256-gcm;
        }
        policy IKE-V1-1 {
            mode main;
            proposals IKE-P1;
            pre-shared-key ascii-text "d8bdRxaY22oH1j89Z2nATeYyrXfP9ga6xC5mi0RG1uc";
        }
        gateway GW-V1-1 {
            ike-policy IKE-V1-1;
            address 2001:db8:1::1;
            external-interface lo0.1;
            general-ikeid;
            version v2-only;
        }
    }
    /* Phase 2 configuration */
    ipsec {
        proposal ESP-P2 {
            protocol esp;
            encryption-algorithm aes-256-gcm;
        }
        policy IPSEC-V1-1 {
            perfect-forward-secrecy keys group20;
            proposals ESP-P2;
        }
        vpn VPN-V1-1 {
            bind-interface st0.1;
            df-bit copy;
            ike {
                gateway GW-V1-1;
                ipsec-policy IPSEC-V1-1;
            }
            establish-tunnels on-traffic;
        }
    }
}

We get a route-based VPN because we bind the st0.1 interface to the VPN-V1-1 VPN. Once the VPN is up, any packet entering st0.1 will be encapsulated and sent to the 2001:db8:­1::1 endpoint.

The last step is to configure BGP in the “private” routing instance to exchange routes with the remote site:

routing-instances {
    private {
        routing-options {
            router-id 1.0.3.2;
            maximum-paths 16;
        }
        protocols {
            bgp {
                preference 140;
                log-updown;
                group v4-VPN {
                    type external;
                    local-as 65003;
                    hold-time 6;
                    neighbor 2001:db8:ff::6 peer-as 65001;
                    multipath;
                    export [ NEXT-HOP-SELF OUR-ROUTES NOTHING ];
                }
            }
        }
    }
}

The export filter OUR-ROUTES needs to select the routes to be advertised to the other peers. For example:

policy-options {
    policy-statement OUR-ROUTES {
        term 10 {
            from {
                protocol ospf3;
                route-type internal;
            }
            then {
                metric 0;
                accept;
            }
        }
    }
}

The configuration needs to be repeated for the other peers. The complete version is available on GitHub. Once the BGP sessions are up, we start learning routes from the other sites. For example, here is the route for 2001:db8:­a1::/64:

> show route 2001:db8:a1::/64 protocol bgp table private.inet6.0 best-path

private.inet6.0: 15 destinations, 19 routes (15 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:a1::/64   *[BGP/140] 01:12:32, localpref 100, from 2001:db8:ff::6
                      AS path: 65001 I, validation-state: unverified
                      to 2001:db8:ff::6 via st0.1
                    > to 2001:db8:ff::14 via st0.2

It was learnt both from V1-1 (through st0.1) and V1-2 (through st0.2). The route is part of the private routing instance but encapsulated packets are sent/received in the public routing instance. No route-leaking is needed for this configuration. The VPN cannot be used as a gateway from internal hosts to external hosts (or vice-versa). This could also have been done with JunOS’ security policies (stateful firewall rules) but doing the separation with routing instances also ensures routes from different domains are not mixed and a simple policy misconfiguration won’t lead to a disaster.

Route-based VPN on Linux

Starting from Linux 3.15, a similar configuration is possible with the help of a virtual tunnel interface3. First, we create the “private” namespace:

# ip netns add private
# ip netns exec private sysctl -qw net.ipv6.conf.all.forwarding=1

Any “private” interface needs to be moved to this namespace (no IP is configured as we can use IPv6 link-local addresses):

# ip link set netns private dev eth1
# ip link set netns private dev eth2
# ip netns exec private ip link set up dev eth1
# ip netns exec private ip link set up dev eth2

Then, we create vti6, a tunnel interface (similar to st0.1 in the JunOS example):

# ip tunnel add vti6 \
   mode vti6 \
   local 2001:db8:1::1 \
   remote 2001:db8:3::2 \
   key 6
# ip link set netns private dev vti6
# ip netns exec private ip addr add 2001:db8:ff::6/127 dev vti6
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_policy=1
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_xfrm=1
# ip netns exec private ip link set vti6 mtu 1500
# ip netns exec private ip link set vti6 up

The tunnel interface is created in the initial namespace and moved to the “private” one. It will remember its original namespace where it will process encapsulated packets. Any packet entering the interface will temporarily get a firewall mark of 6 that will be used only to match the appropriate IPsec policy4 below. The kernel sets a low MTU on the interface to handle any possible combination of ciphers and protocols. We set it to 1500 and let PMTUD do its work.

We can then configure strongSwan5:

conn V3-2
  left        = 2001:db8:1::1
  leftsubnet  = ::/0
  right       = 2001:db8:3::2
  rightsubnet = ::/0
  authby      = psk
  mark        = 6
  auto        = route
  keyexchange = ikev2
  keyingtries = %forever
  ike         = aes256gcm16-prfsha384-ecp384!
  esp         = aes256gcm16-prfsha384-ecp384!
  mobike      = no

The IKE daemon configures the following policies in the kernel:

$ ip xfrm policy
src ::/0 dst ::/0
        dir out priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:1::1 dst 2001:db8:3::2
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir fwd priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir in priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
[…]

Those policies are used for any source or destination as long as the firewall mark is equal to 6, which matches the mark configured for the tunnel interface.

The last step is to configure BGP to exchange routes. We can use BIRD for this:

router id 1.0.1.1;
protocol device {
   scan time 10;
}
protocol kernel {
   persist;
   learn;
   import all;
   export all;
   merge paths yes;
}
protocol bgp IBGP_V3_2 {
   local 2001:db8:ff::6 as 65001;
   neighbor 2001:db8:ff::7 as 65003;
   import all;
   export where ifname ~ "eth*";
   preference 160;
   hold time 6;
}

Once BIRD is started in the “private” namespace, we can check routes are learned correctly:

$ ip netns exec private ip -6 route show 2001:db8:a3::/64
2001:db8:a3::/64 proto bird metric 1024
        nexthop via 2001:db8:ff::5  dev vti5 weight 1
        nexthop via 2001:db8:ff::7  dev vti6 weight 1

The above route was learnt from both V3-1 (through vti5) and V3-2 (through vti6). As with the JunOS version, there is no route-leaking between the “private” namespace and the initial one. The VPN cannot be used as a gateway between the two namespaces, only for encapsulation. This also prevents a misconfiguration (for example, IKE daemon not running) from allowing packets to leave the private network.

As a bonus, unencrypted traffic can be observed with tcpdump on the tunnel interface:

$ ip netns exec private tcpdump -pni vti6 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti6, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:51:15.258708 IP6 2001:db8:a1::1 > 2001:db8:a3::1: ICMP6, echo request, seq 69
20:51:15.260874 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 69

You can find all the configuration files for this example on GitHub. The documentation of strongSwan also features a page about route-based VPNs.


  1. Everything in this post should work with Libreswan

  2. fwd is for incoming packets on non-local addresses. It only makes sense in transport mode and is a Linux-only particularity. 

  3. Virtual tunnel interfaces (VTI) were introduced in Linux 3.6 (for IPv4) and Linux 3.12 (for IPv6). Appropriate namespace support was added in 3.15. KLIPS, an alternative out-of-tree stack available since Linux 2.2, also features tunnel interfaces. 

  4. The mark is set right before doing a policy lookup and restored after that. Consequently, it doesn’t affect other possible uses (filtering, routing). However, as Netfilter can also set a mark, one should be careful for conflicts. 

  5. The ciphers used here are the strongest ones currently possible while keeping compatibility with JunOS. The documentation for strongSwan contains a complete list of supported algorithms as well as security recommendations to choose them. 

13 September, 2017 08:20AM by Vincent Bernat

Reproducible builds folks

Reproducible Builds: Weekly report #124

Here's what happened in the Reproducible Builds effort between Sunday September 3 and Saturday September 9 2017:

Media coverage

GSoC and Outreachy updates

Debian will participate in this year's Outreachy initiative and the Reproducible Builds project is soliciting mentors and students to join this round.

For more background please see the following mailing list posts: 1, 2 & 3.

Reproducibility work in Debian

In addition, the following NMUs were accepted:

Reproducibility work in other projects

Patches sent upstream:

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

3 package reviews have been added, 2 have been updated and 2 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (15)

diffoscope development

Development continued in git, including the following contributions:

Mattia Rizzolo also uploaded the version 86 released last week to stretch-backports.

reprotest development

tests.reproducible-builds.org

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

13 September, 2017 07:48AM

September 12, 2017

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in August 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

DebConf 17 in Montreal

I traveled to DebConf 17 in Montreal, Canada. I arrived on 4 August and met a lot of different people whom I had only known by name so far. I think this is definitely one of the best aspects of real-life meetings: putting names to faces and getting to know people better. I totally enjoyed my stay and I would like to thank all the people who were involved in organizing this event. You rock! I also gave a talk about “The past, present and future of Debian Games”, listened to numerous other talks and got a nice sunburn, which luckily turned into a more brownish color by the time I returned home on 12 August. The only negative experience I had was with the airline which was supposed to fly me home to Frankfurt. They decided to cancel the flight one hour before check-in for unknown reasons and just gave me a telephone number to sort things out. No support whatsoever. Fortunately (probably not for him) another DebConf attendee suffered the same fate, and together we could find another flight with Royal Air Maroc the same day. And so we made a short trip to Casablanca, Morocco, and eventually arrived at our final destination in Frankfurt a few hours later. So which airline should you avoid at all costs (they still haven't responded to my refund claims)? It's WoW-Air from Iceland. (just wow)

Debian Games

Debian Java

Debian LTS

This was my eighteenth month as a paid contributor and I have been paid to work 20,25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 31. July until 06. August I was in charge of our LTS frontdesk. I triaged bugs in tinyproxy, mantis, sox, timidity, ioquake3, varnish, libao, clamav, binutils, smplayer, libid3tag, mpg123 and shadow.
  • DLA-1064-1. Issued a security update for freeradius fixing 6 CVE.
  • DLA-1068-1. Issued a security update for git fixing 1 CVE.
  • DLA-1077-1. Issued a security update for faad2 fixing 11 CVE.
  • DLA-1083-1. Issued a security update for openexr fixing 3 CVE.
  • DLA-1095-1. Issued a security update for freerdp fixing 5 CVE.

Non-maintainer upload

  • I uploaded a security fix for openexr (#864078) to fix CVE-2017-9110, CVE-2017-9112 and CVE-2017-9116.

Thanks for reading and see you next time.

12 September, 2017 11:04PM by Apo

Arturo Borrero González

Google Hangouts in Debian testing (Buster)


Google offers a lot of software components packaged specifically for Debian and Debian-like Linux distributions. Examples are: Chrome, Earth and the Hangouts plugin. Also, there are many other Internet services doing the same: Spotify, Dropbox, etc. I’m really grateful for them, since this makes our lives easier.

The problem is that our ecosystem is rather complex, with many distributions and many versions out there. I guess it is not an easy task for them to keep up with such a big variety of support combinations.

In this particular case, it seems Google doesn’t support Debian testing in their .deb packages; testing currently means Debian Buster. The same happens with the official Spotify client package.

I’ve identified several issues with them, to name a few:

  • the packages depend on lsb-core, which is no longer present in Buster (testing).
  • the packages depend on libpango1.0-0, whereas testing contains libpango-1.0-0

I’m in need of using Google Hangout so I’ve been forced to solve this situation by editing the .deb package provided by Google.

Simple steps:

  • 1) create a temporal working directory
% user@debian:~ $ mkdir pkg
% user@debian:~ $ cd pkg/
  • 2) get the original .deb package, the Google Hangout talk plugin.
% user@debian:~/pkg $ wget https://dl.google.com/linux/direct/google-talkplugin_current_amd64.deb
[...]
  • 3) extract the original .deb package
% user@debian:~/pkg $ dpkg-deb -R google-talkplugin_current_amd64.deb google-talkplugin_current_amd64/
  • 4) edit the control file, replace libpango1.0-0 with libpango-1.0-0
% user@debian:~/pkg $ nano google-talkplugin_current_amd64/DEBIAN/control
  • 5) rebuild the package and install it!
% user@debian:~/pkg $ dpkg -b google-talkplugin_current_amd64
% user@debian:~/pkg $ sudo dpkg -i google-talkplugin_current_amd64.deb

I have yet to investigate how to work around the lsb-core issue, so I still can't use Google Earth.
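
One avenue that might be worth trying (a hedged, untested sketch, not something I have verified) is to build a dummy lsb-core package with equivs so that the dependency is satisfied without the real package; whether Google Earth then actually works is another question:

% user@debian:~ $ sudo apt install equivs
% user@debian:~ $ equivs-control lsb-core-dummy
% user@debian:~ $ nano lsb-core-dummy      # set "Package: lsb-core" and some plausible "Version:"
% user@debian:~ $ equivs-build lsb-core-dummy
% user@debian:~ $ sudo dpkg -i lsb-core_*.deb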

12 September, 2017 07:37PM

September 11, 2017

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

rANS encoding of signed coefficients

I'm currently trying to make sense of some still image coding (more details to come at a much later stage!), and for a variety of reasons, I've chosen to use rANS as the entropy coder. However, there's an interesting little detail that I haven't actually seen covered anywhere; maybe it's just because I've missed something, or maybe because it's too blindingly obvious, but I thought I would document what I ended up with anyway. (I had hoped for something even more elegant, but I guess the obvious would have to do.)

For those that don't know rANS coding, let me try to handwave it as much as possible. Your state is typically a single word (in my case, a 32-bit word), which is refilled from the input stream as needed. The encoder and decoder works in reverse order; let's just talk about the decoder. Basically it works by looking at the lowest 12 (or whatever) bits of the decoder state, mapping each of those 2^12 slots to a decoded symbol. More common symbols are given more slots, proportionally to the frequency. Let me just write a tiny, tiny example with 2 bits and three symbols instead, giving four slots:

Lowest bits Symbol
00 0
01 0
10 1
11 2

Note that the zero coefficient here maps to one out of two slots (ie., a range); you don't choose which one yourself, the encoder stashes some information in there (which is used to recover the next control word once you know which symbol there is).

Now for the actual problem: When storing DCT coefficients, we typically want to also store a sign (ie., not just 1 or 2, but also -1/+1 and -2/+2). The statistical distribution is symmetrical, so the sign bit is incompressible (except that of course there's no sign bit needed for 0). We could have done this by introducing new symbols -1 and -2 in addition to our three other ones, but this means we'll need more bits of precision, and accordingly larger look-up tables (which is negative for performance). So let's find something better.

We could also simply store it separately somehow; if the coefficient is non-zero, store the bits in some separate repository. Perhaps more elegantly, you can encode a second symbol in the rANS stream with probability 1/2, but this is more expensive computationally. But both of these have the problem that they're divergent in terms of control flow; nonzero coefficients potentially need to do a lot of extra computation and even loads. This isn't nice for SIMD, and it's not nice for GPU. It's generally not really nice.

The solution I ended up with was simulating a larger table with a smaller one. Simply rotate the table so that the zero symbol has the top slots instead of the bottom slots, and then replicate the rest of the table. For instance, take this new table:

Lowest bits   Symbol
000           1
001           2
010           0
011           0
100           0
101           0
110           -1
111           -2

(The observant reader will note that this doesn't describe the exact same distribution as last time—zero has twice the relative frequency as in the other table—but ignore that for the time being.)

In this case, the bottom half of the table doesn't actually need to be stored! We know that if the three bottom bits are >= 110 (6 in decimal), we have a negative value, can subtract 6, and then continue decoding. If we go past the end of our 2-bit table despite that, we know we are decoding a zero coefficient (which doesn't have a sign), so we can just clamp the read; or for a GPU, reads out-of-bounds on a texture will typically return 0 anyway. So it all works nicely, and the divergent I/O is gone.
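To make the trick concrete, here is a minimal Python sketch of the decode-side slot lookup; the stored table mirrors the toy example above, and all names are mine rather than anything from an actual codec:

# Toy sketch of the rotated-table lookup described above; illustrative only.
STORED = [1, 2, 0, 0]            # the stored 2-bit table: slots 000..011

def decode_signed(state):
    slot = state & 0b111         # look at the three lowest bits of the rANS state
    if slot >= 6:                # slots 110 and 111 hold the negated symbols
        return -STORED[slot - 6]
    # Slots 100 and 101 fall past the stored table; clamping the read yields
    # the zero coefficient (an out-of-bounds GPU texture read returns 0 anyway).
    return STORED[min(slot, len(STORED) - 1)]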

If this piqued your interest, you probably want to read up on rANS in general; Fabian Giesen (aka ryg) has some notes that work as a good starting point, but beware; some of this is pretty confusing. :-)

11 September, 2017 11:56PM

September 10, 2017

hackergotchi for Steve Kemp

Steve Kemp

Debian-Administration.org is closing down

After 13 years the Debian-Administration website will be closing down towards the end of the year.

The site will go read-only at the end of the month, and will slowly be stripped back from that point towards the end of the year - leaving only a static copy of the articles, and content.

This is largely happening due to lack of content. There were only two articles posted last year, and every time I consider writing more content I lose my enthusiasm.

There was a time when people contributed articles, but these days they tend to post such things on their own blogs, on Medium, on Reddit, etc. So it seems like a good time to retire things.

An official notice has been posted on the site-proper.

10 September, 2017 09:00PM

hackergotchi for Adnan Hodzic

Adnan Hodzic

Secure traffic to ZNC on Synology with Let’s Encrypt

I’ve been using IRC since the late 1990’s, and I continue to do so to this day due to it (still) being one of the driving development forces in various open source communities. Especially in Linux development … and some of my acquaintances I can only get in touch with via IRC :)

My Setup

On my Synology NAS I run ZNC (IRC bouncer/proxy) to which I connect using various IRC clients (irssi/XChat Azure/AndChat) from various platforms (Linux/Mac/Android). In this case ZNC serves as a gateway, and no matter which device/client I connect from, I’m always connected to the same IRC servers/chat rooms/settings where I left off.

This is all fine and dandy, but connecting from external networks to ZNC means you will hand in your ZNC credentials in plain text. Which is a problem for me, even though we’re “only” talking about an IRC bouncer/proxy.

With that said, how do we encrypt external traffic to our ZNC?

HowTo: Chat securely with ZNC on Synology using Let’s Encrypt SSL certificate

For reference or more thorough explanation of some of the steps/topics please refer to: Secure (HTTPS) public access to Synology NAS using Let’s Encrypt (free) SSL certificate

Requirements:

  • Synology NAS running DSM >= 6.0
  • Sub/domain name with ability to update DNS records
  • SSH access to your Synology NAS

1: DNS setup

Create an A record for the sub/domain you’d like to use to connect to your ZNC and point it to your Synology NAS external (WAN) IP. For reference, the subdomain I’ll use is: irc.hodzic.org

2: Create Let’s Encrypt certificate

DSM: Control Panel > Security > Certificates > Add

Followed by:

Add a new certificate > Get a certificate from Let's Encrypt

Followed by adding the domain name the A record was created for in Step 1, i.e.:

Get a certificate from Let's Encrypt for irc.hodzic.org

After the certificate is created, don’t forget to configure the newly created certificate to point to the correct domain name, i.e.:

Configure Let's Encrypt Certificate

3: Install ZNC

In case you already have ZNC installed, I suggest you remove it and do a clean install. Mainly due to some problems with the package in the past, where ZNC wouldn’t start automatically on boot, which led to projects like synology-znc-autostart. In the latest version, all of these problems have been fixed and a couple of new features have been added.

ZNC can be installed using Synology’s Package Center, if community package sources are enabled. This can simply be done by adding a new package source:

Name: SynoCommunity
Location: http://packages.synocommunity.com

Enable Community package sources in Synology Package Center

To successfully authenticate the newly added source, under the “General” tab, “Trust Level” should be set to “Any publisher”.

As part of the installation process, the ZNC config will be run with the most sane/useful options and an admin user will be created, allowing you access to the ZNC webadmin.

4: Secure access to ZNC webadmin

Now we want to bind our sub/domain created in “Step 1” to ZNC webadmin, and secure external access to it. This can be done by creating a reverse proxy.

As part of this, you need to know which port has been allocated for SSL in ZNC Webadmin, i.e:

ZNC Webadmin > Global settings - Listen Ports

In this case, we can see it’s 8251.

Reverse Proxy can be created in:

DSM: Control Panel > Application Portal > Reverse Proxy > Create

Where the sub/domain created in “Step 1” needs to be pointed to the ZNC SSL port on localhost, i.e.:

Reverse proxy: irc.hodzic.org setup

The ZNC Webadmin is now available via HTTPS on the external network for the sub/domain you set up as part of Step 1, or in my case:

ZNC webadmin (HTTPS)

As part of this step, in ZNC webadmin I’d advise you to create IRC servers and chatrooms you would like to connect to later.

5: Create .pem file from Let’s Encrypt certificate for ZNC to use

On Synology, Let’s Encrypt certificates are stored in:

/usr/syno/etc/certificate/_archive/

In case you have multiple certificates, you can determine from the creation date in which directory your newly generated certificate is stored, i.e.:

drwx------ 2 root root 4096 Sep 10 12:57 JeRh3Y

Once it’s determined which certificate is the one we want to use, generate the .pem by running the following:

sudo cat /usr/syno/etc/certificate/_archive/JeRh3Y/{privkey,cert,chain}.pem > /usr/local/znc/var/znc.pem

After this, restart ZNC:

sudo /var/packages/znc/scripts/start-stop-status stop && sudo /var/packages/znc/scripts/start-stop-status start

6: Configure IRC client

In this example I’ll use XChat Azure on MacOS, and the same procedure should apply to HexChat/XChat clients on any other platform.

Although most information is picked up from ZNC itself, user details will need to be filled in.

In my setup I automatically connect to the freenode and oftc networks, so I created two server entries for local network usage and two for external usage; the latter is the one we’re concentrating on.

On the “General” tab of our newly created server, the hostname should be the sub/domain we set up as part of “Step 1”, the port number should be the one we defined in “Step 4”, and the SSL checkbox must be checked.

Xchat Azure: Network list - General tab

On the “Connecting” tab, the “Server password” field needs to be filled in using the following format:

johndoe/freenode:password

Where “johndoe” is the ZNC username, “freenode” is the ZNC network name, and “password” is the ZNC password.

Xchat Azure: Network list - Connecting tab

“freenode” in this case must first be created as part of the ZNC webadmin configuration mentioned in “Step 4”. The same goes for the oftc network configuration.

As part of establishing the connection, information about our Let’s Encrypt certificate will be displayed, after which the connection will be established and you’ll automatically be logged into all chatrooms.

Happy hacking!

10 September, 2017 04:40PM by Adnan Hodzic

intrigeri

Can you reproduce this Tails ISO image?

Thanks to a Mozilla Open Source Software award, we have been working on making the Tails ISO images build reproducibly.

We have made huge progress: for a few months now, ISO images built by Tails core developers and our CI system have always been identical. But we're not done yet and we need your help!

Our first call for testing build reproducibility in August uncovered a number of remaining issues. We think that we have fixed them all since, and we now want to find out what other problems may prevent you from building our ISO image reproducibly.

Please try to build an ISO image today, and tell us whether it matches ours!

Build an ISO

These instructions have been tested on Debian Stretch and testing/sid. If you're using another distribution, you may need to adjust them.

If you get stuck at some point in the process, see our more detailed build documentation and don't hesitate to contact us.

Set up the build environment

You need a system that supports KVM, 1 GiB of free memory, and about 20 GiB of disk space.

  1. Install the build dependencies:

    sudo apt install \
        git \
        rake \
        libvirt-daemon-system \
        dnsmasq-base \
        ebtables \
        qemu-system-x86 \
        qemu-utils \
        vagrant \
        vagrant-libvirt \
        vmdebootstrap && \
    sudo systemctl restart libvirtd
    
  2. Ensure your user is in the relevant groups:

    for group in kvm libvirt libvirt-qemu ; do
       sudo adduser "$(whoami)" "$group"
    done
    
  3. Logout and log back in to apply the new group memberships.

Build Tails 3.2~alpha2

This should produce a Tails ISO image:

git clone https://git-tails.immerda.ch/tails && \
cd tails && \
git checkout 3.2-alpha2 && \
git submodule update --init && \
rake build

Send us feedback!

No matter how your build attempt turned out, we are interested in your feedback.

Gather system information

To gather the information we need about your system, run the following commands in the terminal where you've run rake build:

sudo apt install apt-show-versions && \
(
  for f in /etc/issue /proc/cpuinfo
  do
    echo "--- File: ${f} ---"
    cat "${f}"
    echo
  done
  for c in free locale env 'uname -a' '/usr/sbin/libvirtd --version' \
            'qemu-system-x86_64 --version' 'vagrant --version'
  do
    echo "--- Command: ${c} ---"
    eval "${c}"
    echo
  done
  echo '--- APT package versions ---'
  apt-show-versions qemu:amd64 linux-image-amd64:amd64 vagrant \
                    libvirt0:amd64
) | bzip2 > system-info.txt.bz2

Then check that the generated file doesn't contain any sensitive information you do not want to leak:

bzless system-info.txt.bz2

Next, please follow the instructions below that match your situation!

If the build failed

Sorry about that. Please help us fix it by opening a ticket:

  • set Category to Build system;
  • paste the output of rake build;
  • attach system-info.txt.bz2 (this will publish that file).

If the build succeeded

Compute the SHA-512 checksum of the resulting ISO image:

sha512sum tails-amd64-3.2~alpha2.iso

Compare your checksum with ours:

9b4e9e7ee7b2ab6a3fb959d4e4a2db346ae322f9db5409be4d5460156fa1101c23d834a1886c0ce6bef2ed6fe378a7e76f03394c7f651cc4c9a44ba608dda0bc

If the checksums match: success, congrats for reproducing Tails 3.2~alpha2! Please send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 successful" and system-info.txt.bz2 attached. Thanks in advance! Then you can stop reading here.

Else, if the checksums differ: too bad, but really it's good news as the whole point of the exercise is precisely to identify such problems :) Now you are in a great position to help improve the reproducibility of Tails ISO images by following these instructions:

  1. Install diffoscope version 83 or higher and all the packages it recommends. For example, if you're using Debian Stretch:

    sudo apt remove diffoscope && \
    echo 'deb http://ftp.debian.org/debian stretch-backports main' \
      | sudo tee /etc/apt/sources.list.d/stretch-backports.list && \
    sudo apt update && \
    sudo apt -o APT::Install-Recommends="true" \
             install diffoscope/stretch-backports
    
  2. Download the official Tails 3.2~alpha2 ISO image.

  3. Compare the official Tails 3.2~alpha2 ISO image with yours:

    diffoscope \
           --text diffoscope.txt \
           --html diffoscope.html \
           --max-report-size 262144000 \
           --max-diff-block-lines 10000 \
           --max-diff-input-lines 10000000 \
           path/to/official/tails-amd64-3.2~alpha2.iso \
           path/to/your/own/tails-amd64-3.2~alpha2.iso
    bzip2 diffoscope.{txt,html}
    
  4. Send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 failed", attaching:

    • system-info.txt.bz2;
    • the smallest file among diffoscope.txt.bz2 and diffoscope.html.bz2, except if they are larger than 100 KiB, in which case better upload the file somewhere (e.g. share.riseup.net) and share the link in your email.

Thanks a lot!

Credits

Thanks to Ulrike & anonym who authored a draft on which this blog post is based.

10 September, 2017 02:27PM

Sylvain Beucler

dot-zed archive file format

TL,DR: I reverse-engineered the .zed encrypted archive format.
Following a clean-room design, I'm providing a description that can be implemented by a third-party.
Interested? :)

(reference version at: https://www.beuc.net/zed/)

.zed archive file format

Introduction

Archives with the .zed extension are conceptually similar to an encrypted .zip file.

In addition to a specific format, .zed files support multiple users: files are encrypted using the archive master key, which itself is encrypted for each user and/or authentication method (password, RSA key through certificate or PKCS#11 token). Metadata such as filenames is partially encrypted.

.zed archives are used as stand-alone or attached to e-mails with the help of a MS Outlook plugin. A variant, which is not covered here, can encrypt/decrypt MS Windows folders on the fly like ecryptfs.

In the spirit of academic and independent research this document provides a description of the file format and encryption algorithms for this encrypted file archive.

See the conventions section for conventions and acronyms used in this document.

Structure overview

The .zed file format is composed of several layers.

  • The main container uses the MS Compound File Binary format (MS-CFB), which is notably used by MS Office 97-2003 .doc files. It contains several streams:

    • Metadata stream: in OLE Property Set format (MS-OLEPS), contains 2 blobs in a specific Type-Length-Value (TLV) format:

      • _ctlfile: global archive properties and access list
        It is obfuscated by means of static-key AES encryption.
        The properties include archive initial filename and a global IV.
        A global encryption key is itself encrypted in each user entry.

      • _catalog: file list
        Contains each file's metadata indexed with a 15-byte identifier.
        Directories are supported.
        Full filename is encrypted using AES.
        File extension is (redundantly) stored in clear, and so are file metadata such as modification time.

    • Each file in the archive is compressed with zlib and encrypted with the standard AES algorithm, in a separate stream.
      Several encryption schemes and key sizes are supported.
      The file stream is split in chunks of 512 bytes, individually encrypted.

    • Optional streams, contain additional metadata as well as pictures to display in the application background ("watermarks"). They are not discussed here.

Or as a diagram:

+----------------------------------------------------------------------------------------------------+
| .zed archive (MS-CBF)                                                                              |
|                                                                                                    |
|  stream #1                         stream #2                       stream #3...                    |
| +------------------------------+  +---------------------------+  +---------------------------+     |
| | metadata (MS-OLEPS)          |  | encryption (AES)          |  | encryption (AES)          |     |
| |                              |  | 512-bytes chunks          |  | 512-bytes chunks          |     |
| | +--------------------------+ |  |                           |  |                           |     |
| | | obfuscation (static key) | |  | +-----------------------+ |  | +-----------------------+ |     |
| | | +----------------------+ | |  |-| compression (zlib)    |-|  |-| compression (zlib)    |-|     |
| | | |_ctlfile (TLV)        | | |  | |                       | |  | |                       | | ... |
| | | +----------------------+ | |  | | +---------------+     | |  | | +---------------+     | |     | 
| | +--------------------------+ |  | | | file contents |     | |  | | | file contents |     | |     |
| |                              |  | | |               |     | |  | | |               |     | |     |
| | +--------------------------+ |  |-| +---------------+     |-|  |-| +---------------+     |-|     |
| | | _catalog (TLV)           | |  | |                       | |  | |                       | |     |
| | +--------------------------+ |  | +-----------------------+ |  | +-----------------------+ |     |
| +------------------------------+  +---------------------------+  +---------------------------+     |
+----------------------------------------------------------------------------------------------------+

Encryption schemes

Several AES key sizes are supported, such as 128 and 256 bits.

The Cipher Block Chaining (CBC) block cipher mode of operation is used to decrypt multiple AES 16-byte blocks, which means an initialisation vector (IV) is stored in clear along with the ciphertext.

All filenames and file contents are encrypted using the same encryption mode, key and IV (e.g. if you remove and re-add a file in the archive, the resulting stream will be identical).

No cleartext padding is used during encryption; instead, several end-of-stream handlers are available, so the ciphertext has exactly the size of the cleartext (e.g. the size of the compressed file).

The following variants were identified in the 'encryption_mode' field.

STREAM

This is the end-of-stream handler for:

  • obfuscated metadata encrypted with static AES key
  • filenames and files in archives with 'encryption_mode' set to "AES-CBC-STREAM"
  • any AES ciphertext of size < 16 bytes, regardless of encryption mode

This end-of-stream handler is apparently specific to the .zed format, and applied when the cleartext does not end on a 16-byte boundary; in this case special processing is performed on the last partial 16-byte block.

The encryption and decryption phases are identical: let's assume the last partial block of cleartext (for encryption) or ciphertext (for decryption) was appended after all the complete 16-byte blocks of ciphertext:

  • the second-to-last block of the ciphertext is encrypted in AES-ECB mode (i.e. block cipher encryption only, without XORing with the IV)

  • then XOR-ed with the last partial block (hence truncated to the length of the partial block)

In either case, if the full ciphertext is less than one AES block (< 16 bytes), then the IV is used instead of the second-to-last block.
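Based on that description, a minimal Python sketch of the decryption side could look as follows (pycryptodome is assumed for the AES primitives; this is my reading of the handler, not reference code):

from Crypto.Cipher import AES

def stream_decrypt(key, iv, ciphertext):
    # Regular CBC decryption for the complete 16-byte blocks.
    full_len = len(ciphertext) - (len(ciphertext) % 16)
    full, partial = ciphertext[:full_len], ciphertext[full_len:]
    cleartext = AES.new(key, AES.MODE_CBC, iv).decrypt(full) if full else b""
    if partial:
        # The "second-to-last block" is the last full ciphertext block, or the
        # IV when the whole ciphertext is shorter than one AES block.
        prev = full[-16:] if full else iv
        pad = AES.new(key, AES.MODE_ECB).encrypt(prev)
        cleartext += bytes(a ^ b for a, b in zip(partial, pad))
    return cleartext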

CTS

CTS or CipherText Stealing is the end-of-stream handler for:

  • filenames and files in archives with 'encryption_mode' set to "AES-CBC-CTS".
    • exception: if the size of the ciphertext is < 16 bytes, then "STREAM" is used instead.

It matches the CBC-CS3 variant as described in Recommendation for Block Cipher Modes of Operation: Three Variants of Ciphertext Stealing for CBC Mode.

Empty cleartext

Since empty filenames or metadata are invalid, and since all files are compressed (resulting in a minimum 8-byte zlib cleartext), no empty cleartext was encrypted in the archive.

metadata stream

It is named 05356861616161716149656b7a6565636e576a33317a7868304e63 (hexadecimal), i.e. the character with code 5 followed by '5haaaaqaIekzeecnWj31zxh0Nc' (ASCII).

The format used is OLE Property Set (MS-OLEPS).

It introduces 2 property names "_ctlfile" (index 3) and "_catalog" (index 4), and 2 instances of said properties each containing an application-specific VT_BLOB (type 0x0041).

_ctlfile: obfuscated global properties and access list

This subpart is stored under index 3 ("_ctlfile") of the MS-OLEPS metadata.

It consists of:

  • static delimiter 0765921A2A0774534752073361719300 (hexadecimal) followed by 0100 (hexadecimal) (18 bytes total)
  • 16-byte IV
  • ciphertext
  • 1 uint32be representing the length of all the above
  • static delimiter 0765921A2A0774534752073361719300 (hexadecimal) followed by "ZoneCentral (R)" (ASCII) and a NUL byte (32 bytes total)

The ciphertext is encrypted with AES-CBC "STREAM" mode using 128-bit static key 37F13CF81C780AF26B6A52654F794AEF (hexadecimal) and the prepended IV so as to obfuscate the access list. The ciphertext is continuous and not split in chunks (unlike files), even when it is larger than 512 bytes.
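Putting the layout together, a Python sketch of the deobfuscation step could look like this (offsets follow the field list above; stream_decrypt() is the hypothetical STREAM helper sketched in the encryption schemes section):

STATIC_KEY = bytes.fromhex("37F13CF81C780AF26B6A52654F794AEF")

def deobfuscate_ctlfile(blob):
    # 18-byte leading delimiter, 16-byte IV, ciphertext, then a uint32be
    # length and a 32-byte trailing delimiter that are not needed here.
    iv = blob[18:34]
    ciphertext = blob[34:-(4 + 32)]
    return stream_decrypt(STATIC_KEY, iv, ciphertext)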

The decrypted text contains properties in a TLV format as described in _ctlfile TLV:

  • global archive properties as a 'fileprops' structure,

  • extra archive properties as a 'archive_extraprops' structure

  • users access list as a series of 'passworduser' and 'rsauser' entries.

Archives may include "mandatory" users that cannot be removed. They are typically used to add an enterprise-wide recovery RSA key to all archives. Extreme care must be taken to protect these keys, as they can decrypt all past archives generated from within that company.

_catalog: file list

This subpart is stored under index 4 ("_catalog") of the MS-OLEPS metadata.

It contains a series of 'fileprops' TLV structures, one for each file or directory.

The file hierarchy can be reconstructed by checking the 'parent_directory_id' field of each file entry. If 'parent_directory_id' is 0 then the file is located at the top-level of the hierarchy, otherwise it's located under the directory with the matching 'file_id'.
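As a small illustration, assuming the catalog entries have already been parsed into dictionaries keyed by the field names listed in the _catalog TLV section, the hierarchy can be rebuilt along these lines (a sketch, not reference code):

def build_tree(entries):
    # Map each parent file_id (0, however it is encoded, meaning top-level)
    # to the list of entries located directly under it.
    children = {}
    for entry in entries:
        children.setdefault(entry["parent_directory_id"], []).append(entry)
    return children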

TLV format

This format is a series of fields:

  • 4 bytes for Type (specified as a 4-bytes hexadecimal below)
  • 4 bytes for value Length (uint32be)
  • Value

Value semantics depend on its Type. It may contain an uint32be integer, a UTF-16LE string, a character sequence, or an inner TLV structure.

Unless otherwise noted, TLV structures appear once.

Some fields are optional and may not be present at all (e.g. 'archive_createdwith').

Some fields are unique within a structure (e.g. 'files_iv'), other may be repeated within a structure to form a list (e.g. 'fileprops' and 'passworduser').

The following top-level types have been identified, and are detailed in the next sections:

  • 80110600: fileprops, used for the file list as well as for the global archive properties
  • 001b0600: archive_extraprops
  • 80140600: accesslist

Some additional unidentified types may be present.
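A minimal TLV walker in Python, under the field layout above, could look like this (a sketch, not reference code):

import struct

def iter_tlv(buf):
    # Yield (type, value) pairs; values may themselves contain nested TLVs.
    offset = 0
    while offset + 8 <= len(buf):
        type_tag = buf[offset:offset + 4].hex()
        (length,) = struct.unpack(">I", buf[offset + 4:offset + 8])
        yield type_tag, buf[offset + 8:offset + 8 + length]
        offset += 8 + length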

_ctlfile TLV

  • 80110600: fileprops (TLV structure): global archive properties
    • 00230400: archive_pathname (UTF-16LE string): initial archive filename (past versions also leaked the full pathname of the initial archive)
    • 80270200: encryption_mode (uint32be): 103 for "AES-CBC-STREAM", 104 for "AES-CBC-CTS"
    • 80260200: encryption_strength (uint32be): AES key size, in bytes (e.g. 32 means AES with a 256-bit key)
    • 80280500: files_iv (sequence of bytes): global IV for all filenames and file contents
  • 001b0600: archive_extraprops (TLV structure): additional archive properties (optional)
    • 00c40500: archive_creationtime (FILETIME): date and time when archive was initially created (optional)
    • 00c00400: archive_createdwith (UTF-16LE string): uuid-like structure describing the application that initialized the archive (optional)
      {00000188-1000-3CA8-8868-36F59DEFD14D} is Zed! Free 1.0.188.
  • 80140600: accesslist (TLV structure): describe the users, their key encryption and their permissions
    • 80610600: passworduser (TLV structure): user identified by password (0 or more)
    • 80620600: rsauser (TLV structure): user identified by RSA key (via file or PKCS#11 token) (0 or more)
    • Fields common to passworduser and rsauser:
      • 80710400: login (UTF-16LE string): user name
      • 80720300: login_md5 (sequence of bytes): used by the application to search for a user name
      • 807e0100: priv1 (uchar): user privileges; present and set to 1 when user is admin (optional)
      • 00830200: priv2 (uint32be): user privileges; present and set to 2 when user is admin, present and set to 5 when user is a marked as mandatory, e.g. for recovery keys (optional)
      • 80740500: files_key_ciphertext (sequence of bytes): the archive encryption key, itself encrypted
      • 00840500: user_creationtime (FILETIME): date and time when the user was added to the archive
    • passworduser-specific fields:
      • 80760500: pbe_salt (sequence of bytes): salt for PBE
      • 80770200: pbe_iter (uint32be): number of iterations for PBE
      • 80780200: pkcs12_hashfunc (uint32be): hash function used for PBE and PBA key derivation
      • 80790500: pba_checksum (sequence of bytes): password derived with PBA to check for password validity
      • 807a0500: pba_salt (sequence of bytes): salt for PBA
      • 807b0200: pba_iter (uint32be): number of iterations for PBA
    • rsauser-specific fields:
      • 807d0500: certificate (sequence of bytes): user X509 certificate in DER format

_catalog TLV

  • 80110600: fileprops (TLV structure): describe the archive files (0 or more)
    • 80300500: file_id (sequence of bytes): a 16-byte unique identifier
    • 80310400: filename_halfanon (UTF-16LE string): half-anonymized filename, e.g. File1.txt (leaking filename extension)
    • 00380500: filename_ciphertext (sequence of bytes): encrypted filename; may have a trailing NUL byte once decrypted
    • 80330500: file_size (uint64le): decompressed file size in bytes
    • 80340500: file_creationtime (FILETIME): file creation date and time
    • 80350500: file_lastwritetime (FILETIME): file last modification date and time
    • 80360500: file_lastaccesstime (FILETIME): file last access date and time
    • 00370500: parent_directory_id (sequence of bytes): file_id of the parent directory, 0 is top-level
    • 80320100: is_dir (uint32be): 1 if entry is directory (optional)

Decrypting the archive AES key

rsauser

The user accessing the archive will be authenticated by comparing his/her X509 certificate with the one stored in the 'certificate' field using DER format.

The 'files_key_ciphertext' field is then decrypted using the PKCS#1 v1.5 encryption mechanism, with the private key that matches the user certificate.

passworduser

An intermediary user key, a user IV and an integrity checksum will be derived from the user password, using the deprecated PKCS#12 method as described at rfc7292 appendix B.

Note: this is not PKCS#5 (nor PBKDF1/PBKDF2), this is an incompatible method from PKCS#12 that notably does not use HMAC.

The 'pkcs12_hashfunc' field defines the underlying hash function. The following values have been identified:

  • 21: SHA-1
  • 22: SHA-256

PBA - Password-based authentication

The user accessing the archive will be authenticated by deriving an 8-byte sequence from his/her password.

The parameters for the derivation function are:

  • ID: 3
  • 'pba_salt': the salt, typically an 8-byte random sequence
  • 'pba_iter': the iteration count, typically 200000

The derivation is checked against 'pba_checksum'.

PBE - Password-based encryption

Once the user is identified, 2 new values are derived from the password with different parameters to produce the IV and the key decryption key, with the same hash function:

  • 'pbe_salt': the salt, typically an 8-byte random sequence
  • 'pbe_iter': the iteration count, typically 100000

The parameters specific to user key are:

  • ID: 1
  • size: 32

The user key needs to be truncated to a length of 'encryption_strength', as specified in bytes in the archive properties.

The parameters specific to user IV are:

  • ID: 2
  • size: 16

Once the key decryption key and the IV are derived, 'files_key_ciphertext' is decrypted using AES CBC, with PKCS#7 padding.
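Assembling those pieces, recovering the archive key for a password user could be sketched as follows; pkcs12_kdf() is a placeholder for the RFC 7292 appendix B derivation (not implemented here), and pycryptodome is assumed for the AES part:

from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

def decrypt_archive_key(password, user, encryption_strength):
    # 'user' stands for a parsed passworduser entry; pkcs12_kdf() is hypothetical.
    key = pkcs12_kdf(password, user["pbe_salt"], user["pbe_iter"],
                     purpose_id=1, size=32, hashfunc=user["pkcs12_hashfunc"])
    iv = pkcs12_kdf(password, user["pbe_salt"], user["pbe_iter"],
                    purpose_id=2, size=16, hashfunc=user["pkcs12_hashfunc"])
    key = key[:encryption_strength]        # truncate to the archive key size
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return unpad(cipher.decrypt(user["files_key_ciphertext"]), 16)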

Identifying file streams

The name of the MS-CFB stream is derived by shuffling the bytes from the 'file_id' field and then encoding the result as hexadecimal.

The reordering is:

Initial  offset: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Shuffled offset: 3 2 1 0 5 4 7 6 8 9 10 11 12 13 14 15

The 16th byte is usually a NUL byte, hence the stream identifier is a 30-character-long string.
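In Python, the derivation could be sketched as follows (lower-case hexadecimal output is an assumption on my part):

def stream_name(file_id):
    # Reorder the 16 bytes of file_id as listed above, drop the usual trailing
    # NUL byte, and hex-encode the result to obtain the MS-CFB stream name.
    perm = [3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15]
    shuffled = bytes(file_id[i] for i in perm)
    return shuffled.rstrip(b"\x00").hex()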

Decrypting files

The compressed stream is split in chunks of 512 bytes, each of them encrypted separately using AES CBC and the global archive encryption scheme. Decryption uses the global AES key (retrieved using the user credentials), and the global IV (retrieved from the deobfuscated archive metadata).

The IV for each chunk is computed by:

  • expressing the current chunk number as little endian on 16 bytes
  • XORing it with the global IV
  • encrypting with the global AES key in ECB mode (without IV).

Each chunk is an independent stream and the decryption process involves end-of-stream handling even if this is not the end of the actual file. This is particularly important for the CTS handler.

Note: this is not to be confused with the CTR block cipher mode of operation, which operates differently and requires a nonce.
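Putting the chunk handling together, a sketch of file-stream decryption for the "AES-CBC-STREAM" case could look like this (chunks are assumed to be numbered from 0; stream_decrypt() is the hypothetical STREAM helper from the encryption schemes section, and pycryptodome provides the AES-ECB step):

from Crypto.Cipher import AES

def decrypt_file_stream(global_key, global_iv, stream):
    cleartext = b""
    for n, offset in enumerate(range(0, len(stream), 512)):
        # Derive the per-chunk IV: chunk number as 16-byte little endian,
        # XOR-ed with the global IV, then AES-ECB encrypted with the global key.
        counter = n.to_bytes(16, "little")
        xored = bytes(a ^ b for a, b in zip(counter, global_iv))
        chunk_iv = AES.new(global_key, AES.MODE_ECB).encrypt(xored)
        # Each 512-byte chunk is decrypted as an independent stream.
        cleartext += stream_decrypt(global_key, chunk_iv, stream[offset:offset + 512])
    return cleartext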

Decompressing files

Compressed streams are zlib streams with default compression options and can be decompressed following the zlib format.

Test cases

Excluded for brevity, cf. https://www.beuc.net/zed/#test-cases.

Conventions and references

Feedback

Feel free to send comments at beuc@beuc.net. If you have .zed files that you think are not covered by this document, please send them as well (replace sensitive files with other ones). The author's GPG key can be found at 8FF1CB6E8D89059F.

Copyright (C) 2017 Sylvain Beucler

Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.

10 September, 2017 01:50PM

hackergotchi for Charles Plessy

Charles Plessy

Summary of the discussion on off-line keys.

Last month, there has been an interesting discussion about off-line GnuPG keys and their storage systems on the debian-project@l.d.o mailing list. I tried to summarise it in the Debian wiki, in particular by creating two new pages.

10 September, 2017 12:06PM

hackergotchi for Joachim Breitner

Joachim Breitner

Less parentheses

Yesterday, at the Haskell Implementers Workshop 2017 in Oxford, I gave a lightning talk titled ”syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell.

The talk caused quite some heated discussion, and since the Internet likes heated discussion, I will happily share these ideas with you.

Context aka. Sections

This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these:

analyseExpr :: Expr -> Expr
analyseExpr (Var v) = change v
analyseExpr (App e1 e2) =
  App (analyseExpr e1) (analyseExpr e2)
analyseExpr (Lam v e) = Lam v (analyseExpr e)
analyseExpr (Case scrut alts) =
  Case (analyseExpr scrut) (analyseAlt <$> alts)

analyseAlt :: Alt -> Alt
analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call:

type Flag = Bool

analyseExpr :: Flag -> Expr -> Expr
analyseExpr flag (Var v) = if flag then change1 v else change2 v
analyseExpr flag (App e1 e2) =
  App (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
analyseExpr flag (Case scrut alts) =
  Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)

analyseAlt :: Flag -> Alt -> Alt
analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)

I find this code problematic. The intention was: “flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant.”

But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does “change”, as analyseExpr passes something else in the Lam case. The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as “variables in scope”) is passed around.

So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!

The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:

type Flag = Bool
section (flag :: Flag) where
  analyseExpr :: Expr -> Expr
  analyseExpr (Var v) = if flag then change1 v else change2 v
  analyseExpr (App e1 e2) =
    App (analyseExpr e1) (analyseExpr e2)
  analyseExpr (Lam v e) = Lam v (analyseExpr e)
  analyseExpr (Case scrut alts) =
    Case (analyseExpr scrut) (analyseAlt <$> alts)

  analyseAlt :: Alt -> Alt
  analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

Now the intention is clear: Within a clearly marked block, flag is fixed and when reasoning about this code I do not have to worry that it might change. Either all variables will be passed to change1, or all to change2. An important distinction!

Therefore, inside the section, the type of analyseExpr does not mention Flag, whereas outside its type is Flag -> Expr -> Expr. This is a bit unusual, but not completely: You see precisely the same effect in a class declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.

Note that idioms like implicit parameters or the Reader monad do not give the guarantee that the parameter is (locally) constant.

More details can be found in the GHC proposal that I prepared, and I invite you to raise concern or voice support there.

Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template Haskell based implementation of this idea in the seal-module package!

Less parentheses 1: Bulleted argument lists

The next two proposals are all about removing parentheses. I believe that Haskell’s tendency to express complex code with no or few parentheses is one of its big strengths, as it makes it easier to visually parse programs. A common idiom is to use the $ operator to separate a function from a complex argument without parentheses, but it does not help when there are multiple complex arguments.

For that case I propose to steal an idea from the surprisingly successful markup language markdown, and use bulleted lists to indicate multiple arguments:

foo :: Baz
foo = bracket
        • some complicated code
          that is evaluated first
        • other complicated code for later
        • even more complicated code

I find this very easy to visually parse and navigate.

It is actually possible to do this now, if one defines (•) = id with infixl 0 •. A dedicated syntax extension (-XArgumentBullets) is preferable:

  • It only really adds readability if the bullets are nicely vertically aligned, which the compiler should enforce.
  • I would like to use $ inside these complex arguments, and multiple operators of precedence 0 do not mix. (infixl -1 • would help).
  • It should be possible to nest these, and distinguish different nesting levels based on their indentation.

Less parentheses 2: Whitespace precedence

The final proposal is the most daring. I am convinced that it improves readability and should be considered when creating a new language. As for Haskell, I am at the moment not proposing this as a language extension (but could be convinced to do so if there is enough positive feedback).

Consider this definition of append:

(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs++ys)

Imagine you were explaining the last line to someone orally. How would you speak it? One common way to do so is to not read the parentheses out aloud, but rather speak parenthesised expression more quickly and add pauses otherwise.

We can do the same in syntax!

(++) :: [a] -> [a] -> [a]
[]   ++ ys = ys
x:xs ++ ys = x : xs++ys

The rule is simple: A sequence of tokens without any space is implicitly parenthesised.

The reaction I got in Oxford was horror and disgust. And that is understandable – we are very used to ignore spacing when parsing expressions (unless it is indentation, of course. Then we are no longer horrified, as our non-Haskell colleagues are when they see our code).

But I am convinced that once you let the rule sink in, you will have no problem parsing such code with ease, and soon even with greater ease than the parenthesised version. It is a very natural thing to look at the general structure, identify “compact chunks of characters”, mentally group them, and then go and separately parse the internals of the chunks and how the chunks relate to each other. More natural than first scanning everything for ( and ), matching them up, building a mental tree, and then digging deeper.

Incidentally, there was a non-programmer present during my presentation, and while she did not openly contradict the dismissive groan of the audience, I later learned that she found this variant quite obvious to understand and easier to read than the parenthesised code.

Some FAQs about this:

  • What about an operator with space on one side but not on the other? I’d simply forbid that, and hence enforce readable code.
  • Do operator sections still require parenthesis? Yes, I’d say so.
  • Does this overrule operator precedence? Yes! a * b+c == a * (b+c).
  • What is a token? Good question, and I have not yet decided. In particular: Is a parenthesised expression a single token? If so, then (Succ a)+b * c parses as ((Succ a)+b) * c, otherwise it should probably simply be illegal.
  • Can we extend this so that one space binds tighter than two spaces, and so on? Yes we can, but really, we should not.
  • This is incompatible with Agda’s syntax! Indeed it is, and I really like Agda’s mixfix syntax. Can’t have everything.
  • Has this been done before? I have not seen it in any language, but Lewis Wall has blogged this idea before.

Well, let me know what you think!

10 September, 2017 10:10AM by Joachim Breitner (mail@joachim-breitner.de)