October 21, 2017

Michael Stapelberg

pk4: a new tool to avail the Debian source package producing the specified package

UNIX distributions used to come with the system source code in /usr/src. This is a concept which fascinates me: if you want to change something in any part of your system, just make your change in the corresponding directory, recompile, reinstall, and you can immediately see your changes in action.

So, I decided I wanted to build a tool which gives you that same experience, without the downsides of additional disk space usage and slower update times caused by maintaining /usr/src.

The result of this effort is a tool called pk4 (mnemonic: get me the package for…) which I just uploaded to Debian.

What distinguishes this tool from an apt source call is the combination of a number of features:

  • pk4 defaults to the version of the package which is installed on your system. This means when installing the resulting packages, you won’t be forced to upgrade your system in case you’re not running the latest available version.
    In case the package is not installed on your system, the candidate (see apt policy) will be used.
  • pk4 tries hard to resolve the provided argument(s): you can specify Debian binary package names, Debian source package names, or file paths on your system (in which case the owning package will be used).
  • pk4 comes with tab completion for bash and zsh.
  • pk4 caps the disk usage of the checked out packages by deleting the oldest ones after crossing a limit (default: 2GiB).
  • pk4 allows users to enable hooks, either their own or those shipped with pk4, e.g. git-init.
    The git-init hook in particular results in an experience reminiscent of dgit, and in fact it might be useful to combine the two tools in some way.
  • pk4 optimizes for low latency of each operation.
  • pk4 respects your APT configuration, i.e. should work in company intranets.
  • pk4 tries hard to download source packages, falling back to snapshot.debian.org when necessary.

If you don’t want to wait for the package to clear the NEW queue, you can get it from here in the meantime:

wget https://people.debian.org/~stapelberg/pk4/pk4_1_amd64.deb
sudo apt install ./pk4_1_amd64.deb

You can find the sources and issue tracker at https://github.com/Debian/pk4.

Here is a terminal screencast of the tool in action, availing the sources of i3, applying a patch, rebuilding the package and replacing the installed binary packages:
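In commands, that session looks roughly like this; the version-suffixed directory and the patch file are illustrative assumptions, not literal pk4 output:

pk4 i3                         # fetch the source of the installed i3 version
cd i3-4.14                     # illustrative; cd into the checkout pk4 created
patch -p1 < ~/my-fix.patch     # apply your change
dpkg-buildpackage -us -uc -b   # rebuild the binary packages
sudo apt install ../i3_*.deb   # replace the installed binary packages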

21 October, 2017 08:05AM

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

Sal Mubarak 2074

Wishing all Debian people a prosperous and auspicious Gujarati new year (V.S. 2074 called Saumya.)

This year fireworks became legal for the first time in New Jersey. Not that it ever stopped us before but it is nice to see the government stop meddling for no reason. (Eff you, Indian Supreme Court.)

Sparklers on Diwali.

Although you can only see sparklers in the picture above, we got enough armament to make ISIS jealous. There were also lots of diabetes-inducing sweets and (inexpensive, practical) presents for young and old. That's what I call a proper Diwali and new year.

21 October, 2017 06:14AM

October 20, 2017

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, September 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, about 170 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours is the same as last month. But we have a new sponsor in the pipe.

The security tracker currently lists 52 packages with a known CVE and the dla-needed.txt file lists 49. The number of packages with open issues decreased slightly compared to last month, but we’re not yet back to the usual situation.

Thanks to our sponsors



20 October, 2017 01:03PM by Raphaël Hertzog

October 19, 2017

hackergotchi for Clint Adams

Clint Adams

Scuttlebutt at the precinct

So who’s getting patchwork into Debian?


19 October, 2017 10:10PM

Lior Kaplan

Hacktoberfest 2017 @ Tel Aviv

I gave my “Midburn – creating an open source community” talk in Hacktoberfest 2017 @ Tel Aviv. This is the local version of an initiative by DigitalOcean and GitHub.

I was pleasantly surprised by the vibe that both the location and the organizers gave the event, and by the fact that they could easily arrange t-shirts, pizza and beer.

Looking at the GitHub search, it seems the worldwide event brings in many new contributions. I think we have something to learn about how to run a mass event like that, especially one that crosses both project and country boundaries (i.e. not a local bug squashing party).


Filed under: Open source businesses

19 October, 2017 09:22PM by Kaplan

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Introducing Narabu, part 2: Meet the GPU

Narabu is a new intraframe video codec. You may or may not want to read part 1 first.

The GPU, despite being vastly more flexible than it was fifteen years ago, is still a very different beast from your CPU, and not all problems map well to it performance-wise. Thus, before designing a codec, it's useful to know what our platform looks like.

A GPU has lots of special functionality for graphics (well, duh), but we'll be concentrating on the compute shader subset in this context, ie., we won't be drawing any polygons. Roughly, a GPU (as I understand it!) is built up as follows:

A GPU contains 1–20 cores; NVIDIA calls them SMs (shader multiprocessors), Intel calls them subslices. (Trivia: A typical mid-range Intel GPU contains two cores, and thus is designated GT2.) One such core usually runs the same program, although on different data; there are exceptions, but typically, if your program can't fill an entire core with parallelism, you're wasting energy. Each core, in addition to tons (thousands!) of registers, also has some “shared memory” (also called “local memory” sometimes, although that term is overloaded), typically 32–64 kB, which you can think of in two ways: Either as a sort-of explicit L1 cache, or as a way to communicate internally on a core. Shared memory is a limited, precious resource in many algorithms.

Each core/SM/subslice contains about 8 execution units (Intel calls them EUs, NVIDIA/AMD calls them something else) and some memory access logic. These multiplex a bunch of threads (say, 32) and run in a round-robin-ish fashion. This means that a GPU can handle memory stalls much better than a typical CPU, since it has so many streams to pick from; even though each thread runs in-order, it can just kick off an operation and then go to the next thread while the previous one is working.

Each execution unit has a bunch of ALUs (typically 16) and executes code in a SIMD fashion. NVIDIA calls these ALUs “CUDA cores”, AMD calls them “stream processors”. Unlike on a CPU, this SIMD has full scatter/gather support (although sequential access, especially in certain patterns, is much more efficient than random access), lane enable/disable so it can work with conditional code, etc. The typically fastest operation is a 32-bit float muladd; usually that's single-cycle. GPUs love 32-bit FP code. (In fact, in some GPU languages, you won't even have 8-, 16-bit or 64-bit types. This is annoying, but not the end of the world.)

The vectorization is not exposed to the user in typical code (GLSL has some vector types, but they're usually just broken up into scalars, so that's a red herring), although in some programming languages you can swizzle the SIMD lanes directly to take advantage of that (there are also schemes for broadcasting bits by “voting” etc.). However, it is crucially important to performance; if you have divergence within a warp, the GPU needs to execute both sides of the if. So less divergent code is good.

Such a SIMD group is called a warp by NVIDIA (I don't know if the others have names for it). NVIDIA has SIMD/warp width always 32; AMD used to be 64 but is now 16. Intel supports 4–32 (the compiler will autoselect based on a bunch of factors), although 16 is the most common.

The upshot of all of this is that you need massive amounts of parallelism to be able to get useful performance out of a GPU. A rule of thumb is that if you could have launched about a thousand threads for your problem on a CPU, it's a good fit for a GPU, although this is of course just a guideline.

There's a ton of APIs available to write compute shaders. There's CUDA (NVIDIA-only, but the dominant player), D3D compute (Windows-only, but multi-vendor), OpenCL (multi-vendor, but highly variable implementation quality), OpenGL compute shaders (all platforms except macOS, which has too old drivers), Metal (Apple-only) and probably some that I forgot. I've chosen to go for OpenGL compute shaders since I already use OpenGL shaders a lot, and this saves on interop issues. CUDA probably is more mature, but my laptop is Intel. :-) No matter which one you choose, the programming model looks very roughly like this pseudocode:

for (size_t workgroup_idx = 0; workgroup_idx < NUM_WORKGROUPS; ++workgroup_idx) {   // in parallel over cores
        char shared_mem[REQUESTED_SHARED_MEM];  // private for each workgroup
        for (size_t local_idx = 0; local_idx < WORKGROUP_SIZE; ++local_idx) {  // in parallel on each core
                main(workgroup_idx, local_idx, shared_mem);
        }
}

except in reality, the indices will be split in x/y/z for your convenience (you control all six dimensions, of course), and if you haven't asked for too much shared memory, the driver can silently make larger workgroups if it helps increase parallelism (this is totally transparent to you). main() doesn't return anything, but you can do reads and writes as you wish; GPUs have large amounts of memory these days, and staggering amounts of memory bandwidth.
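To make that model concrete, here is a minimal GLSL compute shader sketch exercising the pieces described above: a fixed workgroup size, shared memory and a barrier. The image bindings and the r32f format are assumptions made up for the example, not anything Narabu-specific:

#version 430
layout(local_size_x = 16, local_size_y = 16) in;  // 256 threads per workgroup

layout(binding = 0, r32f) uniform readonly image2D src;
layout(binding = 1, r32f) uniform writeonly image2D dst;

shared float tile[16][16];  // the per-workgroup “shared memory”

void main() {
        ivec2 gid = ivec2(gl_GlobalInvocationID.xy);  // global pixel coordinate
        uvec2 lid = gl_LocalInvocationID.xy;          // position within the workgroup

        tile[lid.y][lid.x] = imageLoad(src, gid).x;
        barrier();  // wait until the whole workgroup has filled the tile

        // A real kernel would now work on the tile (say, a DCT);
        // here, we just write it back out unchanged.
        imageStore(dst, gid, vec4(tile[lid.y][lid.x]));
}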

Now for the bad part: Generally, you will have no debuggers, no way of logging and no real profilers (if you're lucky, you can get to know how long each compute shader invocation takes, but not what takes time within the shader itself). Especially the latter is maddening; the only real recourse you have is some timers, and then placing timer probes or trying to comment out sections of your code to see if something goes faster. If you don't get the answers you're looking for, forget printf—you need to set up a separate buffer, write some numbers into it and pull that buffer back from the GPU. Profilers are an essential part of optimization, and I had really hoped the world would be more mature here by now. Even CUDA doesn't give you all that much insight—sometimes I wonder if all of this is because GPU drivers and architectures are meant to be shrouded in mystery for competitiveness reasons, but I'm honestly not sure.

So that's it for a crash course in GPU architecture. Next time, we'll start looking at the Narabu codec itself.

19 October, 2017 05:45PM

hackergotchi for Norbert Preining

Norbert Preining

Analysing Debian packages with Neo4j

I just finished the presentation at the Neo4j Online Meetup on getting the Debian UDD into a Neo4j graph database. Besides the usual technical glitches, it worked out quite well.

The code for pulling the data from the UDD, as well as converting and importing it into Neo4j is available on Github Debian-Graph. The slides are also available on Github: preining-debian-packages-neo4j.pdf.
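As a taste of what querying such a graph could look like, here is a small sketch using the Python driver. The :Package label and the :DEPENDS relationship are assumptions for illustration; the actual schema used in Debian-Graph may differ:

from neo4j import GraphDatabase  # import path depends on the driver version

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

# Hypothetical query: which packages have the most reverse dependencies?
query = """
MATCH (p:Package)<-[:DEPENDS]-(r:Package)
RETURN p.name AS package, count(r) AS rdeps
ORDER BY rdeps DESC LIMIT 10
"""

with driver.session() as session:
    for record in session.run(query):
        print(record["package"], record["rdeps"])

driver.close()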

There are still some things I want to implement, time permitting, because it would be a great tool for better integration within Debian. In any case, graph databases are lots of fun to play around with.

19 October, 2017 02:21PM by Norbert Preining

hackergotchi for Daniel Pocock

Daniel Pocock

FOSDEM 2018 Real-Time Communications Call for Participation

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2018 takes place 3-4 February 2018 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 4 February 2018. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 30 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 3 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 February 2018. XMPP Summit web site - please join the mailing list for details.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 2 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org), contact: planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), contact: ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), contact: planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact: planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

19 October, 2017 08:33AM by Daniel.Pocock

October 18, 2017

hackergotchi for Joey Hess

Joey Hess

extending Scuttlebutt with Annah

This post has it all. Flotillas of sailboats, peer-to-peer wikis, games, and de-frogging. But, I need to start by talking about some tech you may not have heard of yet...

  • Scuttlebutt is a way for friends to share feeds of content-addressed messages, peer-to-peer. Most Scuttlebutt clients currently look something like facebook, but there are also github clones, chess games, etc. Many private encrypted conversations going on. All entirely decentralized.
    (My scuttlebutt feed can be viewed here)

  • Annah is a purely functional, strongly typed language. Its design allows individual atoms of the language to be put in content-addressed storage, right down to data types. So the value True and a hash of the definition of what True is can both be treated the same by Annah's compiler.
    (Not to be confused with my sister, Anna, or part of the Debian Installer with the same name that I wrote long ago.)

So, how could these be combined together, and what might the result look like?

Well, I could start by posting a Scuttlebutt message that defines what True is. And another Scuttlebutt message defining False. And then, another Scuttlebutt message to define the AND function, which would link to my messages for True and False. Continue this until I've built up enough Annah code to write some almost useful programs.
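The idea of building up definitions by content-addressed reference can be sketched in a few lines of ordinary code. This toy is neither Annah nor Scuttlebutt; it just shows definitions being stored under the hash of their body, with later definitions referring to earlier ones by hash:

import hashlib

store = {}

def publish(body):
    """Store a definition under the hash of its body, like a feed message."""
    key = hashlib.sha256(body.encode()).hexdigest()[:12]
    store[key] = body
    return key

true_key = publish("lambda t, f: t")    # a Church-style True
false_key = publish("lambda t, f: f")   # ...and False
# AND refers to False by hash, not by name:
and_key = publish("lambda a, b: a(b, ref('%s'))" % false_key)

print(and_key, "->", store[and_key])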

Annah can't do any IO on its own (though it can model IO similarly to how Haskell does), so for programs to be actually useful, there needs to be Scuttlebutt client support. The way typing works in Annah, a program's type can be expressed as a Scuttlebutt link. So a Scuttlebutt client that wants to run Annah programs of a particular type can pick out programs that link to that type, and will know what type of data the program consumes and produces.

Here are a few ideas of what could be built, with fairly simple client-side support for different types of Annah programs...

  • Shared dashboards. Boats in a flotilla are communicating via Scuttlebutt, and want to share a map of their planned courses. Coders collaborating via Scuttlebutt want to see an overview of the state of their project.

    For this, the Scuttlebutt client needs a way to run a selected Annah program of type Dashboard, and display its output like a Scuttlebutt message, in a dashboard window. The dashboard message gets updated whenever other Scuttlebutt messages come in. The Annah program picks out the messages it's interested in, and generates the dashboard message.

    So, send a message updating your boat's position, and everyone sees it update on the map. Send a message with updated weather forecasts as they're received, and everyone can see the storm developing. Send another message updating a waypoint to avoid the storm, and steady as you go...

    The coders, meanwhile, probably tweak their dashboard's code every day. As they add git-ssb repos, they make the dashboard display an overview of their bugs. They get CI systems hooked in and feeding messages to Scuttlebutt, and make the dashboard go green or red. They make the dashboard A-B test itself to pick the right shade of red. And so on...

    The dashboard program is stored in Scuttlebutt so everyone is on the same page, and the most recent version of it posted by a team member gets used. (Just have the old version of the program notice when there's a newer version, and run that one..)

    (Also could be used in disaster response scenarios, where the data and visualization tools get built up on the fly in response to local needs, and are shared peer-to-peer in areas without internet.)

  • Smart hyperlinks. When a hyperlink in a Scuttlebutt message points to an Annah program, optionally with some Annah data, clicking on it can run the program and display the messages that the program generates.

    This is the most basic way a Scuttlebutt client could support Annah programs, and it could be used for tons of stuff. A few examples:

    • Hiding spoilers. Click on the link and it'll display a spoiler about a book/movie.
    • A link to whatever I was talking about one year ago today. That opens different messages as time goes by. Put it in your Scuttlebutt profile or something. (Requires a way for Annah to get the current date, which it normally has no way of accessing.)
    • Choose your own adventure or twine style games. Click on the link and the program starts the game, displaying links to choose between, and so on.
    • Links to custom views. For example, a link could lead to a combination of messages from several different, related channels. Or could filter messages in some way.
  • Collaborative filtering. Suppose I don't want to see frog-related memes in my Scuttlebutt client. I can write an Annah program that calculates a message's frogginess, and outputs a Filtered Message. It can leave a message unchanged, or filter it out, or perhaps minimize its display. I publish the Annah program on my feed, and tell my Scuttlebutt client to filter all messages through it before displaying them to me.

    Since I published the program in my Scuttlebutt feed, my friends can use it too. They can build other filtering functions for other stuff (such as an excess of orange in photos), and integrate my frog filter into their filter program by simply composing the two.

    If I like their filter, I can switch my client to using it. Or not. Filtering is thus subjective, like Scuttlebutt, and the subjectivity is expressed by picking the filter you want to use, or developing a better one.

  • Wiki pages. Scuttlebutt is built on immutable append-only logs; it doesn't have editable wiki pages. But they can be built on top using Annah.

    A smart link to a wiki page is a reference to the Annah program that renders it. Of course being a wiki, there will be more smart links on the wiki page going to other wiki pages, and so on.

    The wiki page includes a smart link to edit it. The editor needs basic form support in the Scuttlebutt client; when the edited wiki page is posted, the Annah program diffs it against the previous version and generates an Edit which gets posted to the user's feed. Rendering the page is just a matter of finding the Edit messages for it from people who are allowed to edit it, and combining them.

    Anyone can fork a wiki page by posting an Edit to their feed. And can then post a smart link to their fork of the page.

    And anyone can merge other forks into their wiki page (this posts a control message that makes the Annah program implementing the wiki accept those forks' Edit messages). Or grant other users permission to edit the wiki page (another control message). Or grant other users permissions to grant other users permissions.

    There are lots of different ways you might want your wiki to work. No one wiki implementation, but lots of Annah programs. Others can interact with your wiki using the program you picked, or fork it and even switch the program used. Subjectivity again.

  • User-defined board games. The Scuttlebutt client finds Scuttlebutt messages containing Annah programs of type Game, and generates a tab with a list of available games.

    The players of a particular game all experience the same game interface, because the code for it is part of their shared Scuttlebutt message pool, and the code to use gets agreed on at the start of a game.

    To play a game, the Scuttlebutt client runs the Annah program, which generates a description of the current contents of the game board.

    So, for chess, use Annah to define a ChessMove data type, and the Annah program takes the feeds of the two players, looks for messages containing a ChessMove, and builds up a description of the chess board.

    As well as the pieces on the game board, the game board description includes Annah functions that get called when the user moves a game piece. That generates a new ChessMove which gets recorded in the user's Scuttlebutt feed.

    This could support a wide variety of board games. If you don't mind the possibility that your opponent might cheat by peeking at the random seed, even games involving things like random card shuffles and dice rolls could be built. Also there can be games like Core Wars where the gamers themselves write Annah programs to run inside the game.

    Variants of games can be developed by modifying and reusing game programs. For example, timed chess is just the chess program with an added check on move time, and time clock display.

  • Decentralized chat bots. Chat bots are all the rage (or were a few months ago, tech fads move fast), but in a decentralized system like Scuttlebutt, a bot running on a server somewhere would be an ugly point of centralization. Instead, write an Annah program for the bot.

    To launch the bot, publish a message in your own personal Scuttlebutt feed that contains the bot's program, and a nonce.

    The user's Scuttlebutt client takes care of the rest. It looks for messages with bot programs, and runs the bot's program. This generates or updates a Scuttlebutt message feed for the bot.

    The bot's program signs the messages in its feed using a private key that's generated by combining the user's public key, and the bot's nonce. So, the bot has one feed per user it talks to, with deterministic content, which avoids a problem with forking a Scuttlebutt feed.

    The bot-generated messages can be stored in the Scuttlebutt database like any other messages and replicated around. The bot appears as if it were a Scuttlebutt user. But you can have conversations with it while you're offline.

    (The careful reader may have noticed that deeply private messages sent to the bot can be decrypted by anyone! This bot thing is probably a bad idea really, but maybe the bot fad is over anyway. We can only hope. It's important that there be at least one bad idea in this list..)

This kind of extensibility in a peer-to-peer system is exciting! With these new systems, we can consider lessons from the world wide web and replicate some of the good parts, while avoiding the bad. Javascript has been both good and bad for the web. The extensibility is great, and yet it's a neverending security and privacy nightmare, and it ties web pages ever more tightly to programs hidden away on servers. I believe that Annah combined with Scuttlebutt will comprehensively avoid those problems. Shall we build it?


This exploration was sponsored by Jake Vosloo on Patreon.

18 October, 2017 07:31PM

hackergotchi for Michal Čihař

Michal Čihař

Gammu 1.38.5

Today, Gammu 1.38.5 has been released. After a long period of bugfix-only releases, this one comes with several noteworthy new features.

The biggest feature is probably that SMSD can now handle USSD messages as well. Those are usually used for things like checking remaining credit, but they're certainly not limited to that. This feature has been contributed thanks to funding on BountySource.
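For a sense of what USSD looks like in practice, a query from the gammu command line looks roughly like this; the service code is operator-specific, and I'm assuming the common getussd invocation here:

gammu getussd '*101#'   # e.g. ask the operator for remaining credit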

You can read more information in the release announcement.

Filed under: Debian English Gammu

18 October, 2017 10:00AM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Introducing Narabu, part 1: Introduction

Narabu is a new intraframe video codec, from the Japanese verb narabu (並ぶ), which means to line up or be parallel.

Let me first state straight up that Narabu isn't where I hoped it would be at this stage; the encoder isn't fast enough, and I have to turn my attention to other projects for a while. Nevertheless, I think it is interesting as a research project in its own right, and I don't think it should stop me from trying to write up a small series. :-)

In the spirit of Leslie Lamport, I'll be starting off with describing what problem I was trying to solve, which will hopefully make the design decisions a lot clearer. Subsequent posts will dive into background information and then finally Narabu itself.

I want a codec to send signals between different instances of Nageru, my free software video mixer, and also longer-term between other software, such as recording or playout. The reason is pretty obvious for any sort of complex configuration; if you are doing e.g. both a stream mix and a bigscreen mix, they will naturally want to use many of the same sources, and sharing them over a single GigE connection might be easier than getting SDI repeaters/splitters, especially when you have a lot of them. (Also, in some cases, you might want to share synthetic signals, such as graphics, that never existed on SDI in the first place.)

This naturally leads to the following demands:

  • Intraframe-only; every frame must be compressed independently. (This isn't strictly needed for all use cases, but is much more flexible, and common in any kind of broadcast.)
  • Need to handle 4:2:2 color, since that's what most capture sources give out, and we want to transmit the raw signals as much as possible. Fairly flexible in input resolution (divisible by 16 is okay, limited to only a given set of resolutions is not).
  • 720p60 video in less than one CPU core (ideally much less); the CPU can already be pretty busy with other work, like x264 encoding of the finished stream, and sharing four more inputs at the same time is pretty common. What matters is mostly a single encode+decode cycle, so fast decode doesn't help if the encoder is too slow.
  • Target bitrates around 100-150 Mbit/sec, at similar quality to MJPEG (ie. 45 dB PSNR for most content). Multiple signals should fit into a normal GigE link at the same time, although getting it to work over 802.11 isn't a big priority.
  • Both encoder and decoder robust to corrupted or malicious data; a dropped frame is fine, a crash is not.
  • Does not depend on uncommon or expensive hardware, or GPUs from a specific manufacturer.
  • GPLv3-compatible implementation. I already link to GPLv3 software, so I don't have a choice here; I cannot link to something non-free (and no antics with dlopen(), please).

There's a bunch of intraframe formats around. The most obvious thing to do would be to use Intel Quick Sync to produce H.264 (intraframe H.264 blows basically everything else out of the sky in terms of PSNR, and QSV hardly uses any power at all), but sadly, that's limited to 4:2:0. I thought about encoding the three color planes as three different monochrome streams, but monochrome is not supported either.

Then there's a host of software solutions. x264 can do 4:2:2, but even on ultrafast, it gobbles up an entire core or more at 720p60 at the target bitrates (mostly in entropy coding). FFmpeg has implementations of all kinds of other codecs, like DNxHD, CineForm, MJPEG and so on, but they all use much more CPU for encoding than the target. NDI would seem to fit the bill exactly, but fails the licensing check, and also isn't robust to corrupted or malicious data. (That, and their claims about video quality are dramatically overblown for any kinds of real video data I've tried.)

So, sadly, this leaves only really one choice, namely rolling my own. I quickly figured I couldn't beat the world on CPU video codec speed, and didn't really want to spend my life optimizing AVX2 DCTs anyway, so again, the GPU will come to our rescue in the form of compute shaders. (There are some other GPU codecs out there, but all that I've found depend on CUDA, so they are NVIDIA-only, which I'm not prepared to commit to.) Of course, the GPU is quite busy in Nageru, but if one can make an efficient enough codec that one stream can work at only 5% or so of the GPU (meaning 1200 fps or so), it wouldn't really make a dent. (As a spoiler, the current Narabu encoder isn't there for 720p60 on my GTX 950, but the decoder is.)

In the next post, we'll look a bit at the GPU programming model, and what it means for how our codec needs to look at the design level.

18 October, 2017 08:25AM

hackergotchi for Norbert Preining

Norbert Preining

Kobo firmware 4.6.9995 mega update (KSM, nickel patch, ssh, fonts)

It has been ages since I last updated the MegaUpdate package for Kobo. Now that a new, seemingly rather bug-free and quick firmware (4.6.9995) has been released, I finally took the time to update the whole package to the latest releases of all the included items. The update includes all my favorite patches and features: Kobo Start Menu, koreader, coolreader, pbchess, ssh access, custom dictionaries, and some side-loaded fonts.


So what are all these items:

  • firmware (thread): the basic software of the device, shipped by Kobo company
  • Metazoa firmware patches (thread): fix some layout options and functionalities, see below for details.
  • Kobo Start Menu (V08, update 5b thread): a menu that pops up before the reading software (nickel) starts, which allows to start alternative readers (like koreader) etc.
  • KOreader (koreader-nightly-20171004, thread): an alternative document reader that supports epub, azw, pdf, djvu and many more
  • pbchess and CoolReader (2017.10.14, thread): a chess program and another alternative reader, bundled together with several other games
  • kobohack (web site): I only use the ssh server
  • ssh access (old post): makes a full computer out of your device by allowing you to log into it via ssh
  • custom dictionaries (thread): this fix updates dictionaries from the folder customdicts to the Kobo dictionary folder. For creating your own Japanese-English dictionary, see this blog entry
  • side-loaded fonts: GentiumBasic and GentiumBookBasic, Verdana, DroidSerif, and Charter-eInk

Install procedure

Download

Mark6 – Kobo GloHD

firmware: Kobo 4.6.9995 for GloHD

Mega update: Kobo-4.6.9995-combined/Mark6/KoboRoot.tgz

Mark5 – Aura

firmware: Kobo 4.6.9995 for Aura

Mega update: Kobo-4.6.9995-combined/Mark5/KoboRoot.tgz

Mark4 – Kobo Glo, Aura HD

firmware: Kobo 4.6.9995 for Glo and AuraHD

Mega update: Kobo-4.6.9995-combined/Mark4/KoboRoot.tgz

Latest firmware

Warning: Sideloading or crossloading the incorrect firmware can break/brick your device. The link below is for Kobo GloHD ONLY.

The first step is to update the Kobo to the latest firmware. This can easily be done by just getting the latest firmware from the links above and unpacking the zip file into the .kobo directory on your device. Eject and enjoy the updating procedure.

Mega update

Get the combined KoboRoot.tgz for your device from the links above and put it into the .kobo directory, then eject and enjoy the updating procedure again.

After this the device should reboot and you will be kicked into KSM, from where, after some waiting, Nickel will be started. If you consider the fonts too small, select Configure, then General, then add item, then select kobomenuFontsize=55 and save.

Remarks to some of the items included

The full list of included things is above, here are only some notes about what specific I have done.

  • Metazoa firmware patches

    Included patches from the Metazoa firmware patches:

    Custom left & right margins
    Fix three KePub fullScreenReading bugs
    Change dicthtml strings to micthtml
    Default ePub monospace font (Courier)
    Custom reading footer style
    Dictionary pop-up frame size increase
    Increase The Cover Size In Library
    Increasing The View Details Container
    New home screen increasing cover size
    Reading stats/Author name cut when the series is showing bug fix 
    New home screen subtitle custom font
    Custom font to Collection and Authors names
    

    If you need/want different patches, you need to do the patching yourself.

  • kobohack-h

    Kobohack (latest version 20150110) originally provided updated libraries and optimizations, but unfortunately it is now completely outdated, and using the library part is not recommended. I only include the ssh server (dropbear) so that you can connect to the Kobo via ssh.

  • ssh fixes

    See the detailed instructions here; the necessary files are already included in the mega upload. It updates /etc/inittab to also run /etc/init.d/rcS2, which in turn starts the inetd server and runs user-supplied commands from /mnt/onboard/run.sh, which lives on the same partition as your documents (a minimal example follows this list).

  • Custom dictionaries

    The necessary directories and scripts are already included in the above combined KoboRoot.tgz, so there is nothing to do but drop updated, fixed, or changed dictionaries into the customdict directory in your Kobo root. After this you need to reboot to get the actual dictionaries updated. See this thread for more information. The adaptations and script mentioned in this post are included in the mega update.
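As promised above, here is a minimal illustration of the kind of commands one might put into /mnt/onboard/run.sh. This is purely an assumption of what such a file could contain; it runs as root at boot, so keep it simple:

#!/bin/sh
# executed at boot via /etc/init.d/rcS2; put custom startup commands here
echo "booted at $(date)" >> /mnt/onboard/boot.log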

WARNINGS

If this is the first time you install this patch, you need to fix the password for root and disable telnet. This is an important step; here are the steps you have to take (taken from this old post):

  1. Turn on Wifi on the Kobo and find IP address
    Go to Settings – Connect and after this is done, go to Settings – Device Information where you will see something like
    IP Address: 192.168.1.NN

    (numbers change!)
  2. telnet into your device
    telnet 192.168.1.NN
    it will ask you the user name, enter “root” (without the quotes) and no password
  3. (ON THE GLO) change home directory of root
    edit /etc/passwd with vi and change the entry for root by changing the 6th field from: “/” to “/root” (without the quotes). After this procedure the line should look like
    root::0:0:root:/root:/bin/sh
    don’t forget to save the file
  4. (ON THE GLO) create ssh keys for dropbear
    [root@(none) ~]# mkdir /etc/dropbear
    [root@(none) ~]# cd /etc/dropbear
    [root@(none) ~]# dropbearkey -t dss -f dropbear_dss_host_key
    [root@(none) ~]# dropbearkey -t rsa -f dropbear_rsa_host_key
  5. (ON YOUR PERSONAL COMPUTER) check that you can log in with ssh
    ssh root@192.168.1.NN
    You should get dropped into your device again
  6. (ON THE GLO) log out of the telnet session (the first one you did)
    [root@(none) ~]# exit
  7. (ON THE GLO) in your ssh session, change the password of root
    [root@(none) ~]# passwd
    you will have to enter the new password two times. Remember it well, you will not be easily able to recover it without opening your device.
  8. (ON THE GLO) disable telnet login
    edit the file /etc/inetd.conf.local on the GLO (using vi) and remove the telnet line (the line starting with 23).
  9. restart your device

The combined KoboRoot.tgz is provided without warranty. If you need to reset your device, don’t blame me!

18 October, 2017 12:54AM by Norbert Preining

October 17, 2017

hackergotchi for Sune Vuorela

Sune Vuorela

KDE still makes Qt

A couple of years ago, I made a blog post, KDE makes Qt, with data about what percentage of Qt contributions came from people who started in KDE. Basically, how many Qt contributions are made by people who used KDE as a “gateway” drug into it.

I have now updated the graphs with data until the end of September 2017:

KDE still makes Qt

Many of these changes are made by people not directly as a result of their KDE work, but as a result of their paid work. But this doesn’t change the fact that KDE is an important project for attracting contributors to Qt, and a very good place to find experienced Qt developers.

17 October, 2017 08:21PM by Sune Vuorela

Reproducible builds folks

Reproducible Builds: Weekly report #129

Here's what happened in the Reproducible Builds effort between Sunday October 8 and Saturday October 14 2017:

Upcoming events

  • On Saturday 21st October, Holger Levsen will present at All Systems Go! in Berlin, Germany on reproducible builds.

  • On Tuesday 24th October, Chris Lamb will present at All Things Open 2017 in Raleigh, NC, USA on reproducible builds.

  • On Wednesday 25th October, Holger Levsen will present at the Open Source Summit Europe in Prague, Czech Republic on reproducible builds.

  • From October 31st - November 2nd we will be holding the 3rd Reproducible Builds summit in Berlin. If you are working in the field of reproducible builds, you should totally be there. Please contact us if you have any questions! Quoting from the public invitation mail:

    These dates are inclusive, ie. the summit will be 3 full days from "9 to 5".
    Best arrive on Monday October 30th and leave on the evening of Thursday,
    November 2nd at the earliest.
    
    
    Meeting content
    ===============
    
    The exact content of the meeting is going to be shaped by the
    participants, but here are the main goals:
    
     - Update & exchange about the status of reproducible builds in various
       projects.
     - Establish spaces for more strategic and long-term thinking than is possible
       in virtual channels.
     - Improve collaboration both between and inside projects.
     - Expand the scope and reach of reproducible builds to more projects.
     - Brainstorming / Designing several things, eg:
      - designing tools enabling end-users to get the most benefits from
        reproducible builds.
      - design of back-ends needed for that.
     - Work together and hack on solutions.
    
    There will be a huge variety of topics to be discussed. To give a few
    examples:
    - continuing design and development work on .buildinfo infrastructure
    - build-path issues everywhere
    - future directions for diffoscope, reprotest & strip-nondeterminism
    - reproducing signed artifacts such as RPMs
    - discussing formats and tools we can share
    - sharing proposals for standards and documentation helpful to spreading the
      reproducible effort
    - and many many more.
    
    Please think about what you want discuss, brainstorm & learn about at this
    meeting!
    
    
    Schedule
    ========
    
    Preliminary schedule for the three days:
    
    9:00 Welcome and breakfast
    9:30 Meeting starts
    12:30 Lunch
    17:00 End of the official schedule
    
    Gunner and Beatrice from Aspiration will help running the meeting. We will
    collect your input in subsequent emails to make the best of everyone's time.
    Feel free to start thinking about what you want to achieve there. We will also
    adjust topics as the meeting goes.
    
    Please note that we are very likely to spend large parts of the meeting away
    from laptops and closer to post-it notes. So make sure you've answered any
    critical emails *before* Tuesday morning! :)
    

Reproducible work in other projects

Pierre Pronchery reported that he has built the foundations for doing more reproducibility work in NetBSD.

Packages fixed

Upstream bugs and patches:

  • Bernhard M. Wiedemann:
    • qutim used RANDOM which is unpredictable and unreproducible.
    • dpdk used locale-dependent sort.

Reproducibility non-maintainer uploads in Debian:

QA fixes in Debian:

Reviews of unreproducible packages

6 package reviews have been added, 30 have been updated and 37 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (40)
  • Eric Valette (1)
  • Markus Koschany (1)

diffoscope development

  • Ximin Luo:
    • Containers: diff the metadata of containers in one central location in the code, so that deep-diff works between all combinations of different container types. This lets us finally close #797759.
    • Tests: add a complete set of cases to test all pairs of container types.
  • Chris Lamb:
    • Temporarily skip the test for ps2ascii(1) in ghostscript > 9.21 which now outputs text in a slightly different format.
    • UI wording improvements.

reprotest development

Version 0.7.3 was uploaded to unstable by Ximin Luo. It included contributions already covered by posts of the previous weeks, as well as new ones:

  • Ximin Luo:
    • Add a --env-build option for testing builds under different sets of environment variables. This is meant to help the discussion over at #876055 about how we should deal with different types of environment variables in a stricter definition of reproducibility.
    • UI and logging tweaks and improvements.
    • Simplify the _shell_ast module and merge it into shell_syn.

Misc.

This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

17 October, 2017 07:29PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Electric Dreams

No spoilers, for those who have yet to watch it...

Channel 4 have been broadcasting a new 10-part series called Electric Dreams, based on some of the short fiction of Philip K Dick. The series was commissioned after Channel 4 lost Black Mirror to Netflix, perhaps to try and find something tonally similar. Electric Dreams is executive-produced by Bryan Cranston, who also stars in one of the episodes yet to be broadcast.

I've read all of PKD's short fiction[1] but it was a long time ago so I have mostly forgotten the stories upon which the series is based. I've quite enjoyed going back and re-reading them after watching the corresponding episodes to see what changes they've made. In some cases the changes are subtle or complementary, in other cases they've whittled the original story right out and installed a new one inside the shell. A companion compilation has been published with just the relevant short stories in it, and from what I've seen browsing it in a book shop it also contains short introductions which might be worth a read.

Things started strong with The Hood Maker, which my wife also enjoyed, although she was disappointed to realise we wouldn't be revisiting those characters in the future. The world-building was strong enough that it seemed like a waste for a single episode.

My favourite episode of those broadcast so far was The Commuter, starring Timothy Spall. The changes made were complementary and immensely expanded the emotional range of the story. In some ways, a key aspect of the original story was completely inverted, which I found quite funny: my original take on Dick's story was Dick implying a particular outcome was horrific, whereas it becomes desirable in the TV episode.

Episode 4, Crazy Diamond

One of the stories most hollowed-out was Sales Pitch which was the basis for Tony Grisoni’s episode Crazy Diamond, starring Steve Buscemi and Sidse Babett Knudsen. Buscemi was good but Knudsen totally stole every frame she was in. Fans of the cancelled Channel 4 show Utopia should enjoy this one: both were directed by Marc Munden and the directing, photography and colour balance really recall it.

The last episode broadcast was Real Life directed by Ronald D Moore of Battlestar Galactica reboot fame and starring Anna Paquin. Like Sales Pitch it bears very little resemblance to the original story. It played around with similar ideas explored in a lot of Sci-Fi movies and TV shows but left me a little flat; I didn't think it contributed much that I hadn't seen before. I was disappointed that there was a relatively conclusive ending. There was a subversive humour in the Dick short that was completely lost in the retelling. The world design seemed pretty generic.

I'm looking forward to Autofac, which is one of the shorts I can remember particularly enjoying.


  1. as collected in the 5 volumes of The Collected Stories of Philip K Dick, although I don't doubt there are some stragglers that were missed out when that series was compiled.

17 October, 2017 11:33AM

Russ Allbery

Bundle haul

Confession time: I started making these posts (eons ago) because a close friend did as well, and I enjoyed reading them. But the main reason why I continue is because the primary way I have to keep track of the books I've bought and avoid duplicates is, well, grep on these posts.

I should come up with a non-bullshit way of doing this, but time to do more elegant things is in short supply, and, well, it's my blog. So I'm boring all of you who read this in various places with my internal bookkeeping. I do try to at least add a bit of commentary.

This one will be more tedious than most since it includes five separate Humble Bundles, which increases the volume a lot. (I just realized I'd forgotten to record those purchases from the past several months.)

First, the individual books I bought directly:

Ilona Andrews — Sweep in Peace (sff)
Ilona Andrews — One Fell Sweep (sff)
Steven Brust — Vallista (sff)
Nicky Drayden — The Prey of Gods (sff)
Meg Elison — The Book of the Unnamed Midwife (sff)
Pat Green — Night Moves (nonfiction)
Ann Leckie — Provenance (sff)
Seanan McGuire — Once Broken Faith (sff)
Seanan McGuire — The Brightest Fell (sff)
K. Arsenault Rivera — The Tiger's Daughter (sff)
Matthew Walker — Why We Sleep (nonfiction)

Some new books by favorite authors, a few new releases I heard good things about, and two (Night Moves and Why We Sleep) from references in on-line articles that impressed me.

The books from security bundles (this is mostly work reading, assuming I'll get to any of it), including a blockchain bundle:

Wil Allsop — Unauthorised Access (nonfiction)
Ross Anderson — Security Engineering (nonfiction)
Chris Anley, et al. — The Shellcoder's Handbook (nonfiction)
Conrad Barsky & Chris Wilmer — Bitcoin for the Befuddled (nonfiction)
Imran Bashir — Mastering Blockchain (nonfiction)
Richard Bejtlich — The Practice of Network Security (nonfiction)
Kariappa Bheemaiah — The Blockchain Alternative (nonfiction)
Violet Blue — Smart Girl's Guide to Privacy (nonfiction)
Richard Caetano — Learning Bitcoin (nonfiction)
Nick Cano — Game Hacking (nonfiction)
Bruce Dang, et al. — Practical Reverse Engineering (nonfiction)
Chris Dannen — Introducing Ethereum and Solidity (nonfiction)
Daniel Drescher — Blockchain Basics (nonfiction)
Chris Eagle — The IDA Pro Book, 2nd Edition (nonfiction)
Nikolay Elenkov — Android Security Internals (nonfiction)
Jon Erickson — Hacking, 2nd Edition (nonfiction)
Pedro Franco — Understanding Bitcoin (nonfiction)
Christopher Hadnagy — Social Engineering (nonfiction)
Peter N.M. Hansteen — The Book of PF (nonfiction)
Brian Kelly — The Bitcoin Big Bang (nonfiction)
David Kennedy, et al. — Metasploit (nonfiction)
Manul Laphroaig (ed.) — PoC || GTFO (nonfiction)
Michael Hale Ligh, et al. — The Art of Memory Forensics (nonfiction)
Michael Hale Ligh, et al. — Malware Analyst's Cookbook (nonfiction)
Michael W. Lucas — Absolute OpenBSD, 2nd Edition (nonfiction)
Bruce Nikkel — Practical Forensic Imaging (nonfiction)
Sean-Philip Oriyano — CEHv9 (nonfiction)
Kevin D. Mitnick — The Art of Deception (nonfiction)
Narayan Prusty — Building Blockchain Projects (nonfiction)
Prypto — Bitcoin for Dummies (nonfiction)
Chris Sanders — Practical Packet Analysis, 3rd Edition (nonfiction)
Bruce Schneier — Applied Cryptography (nonfiction)
Adam Shostack — Threat Modeling (nonfiction)
Craig Smith — The Car Hacker's Handbook (nonfiction)
Dafydd Stuttard & Marcus Pinto — The Web Application Hacker's Handbook (nonfiction)
Albert Szmigielski — Bitcoin Essentials (nonfiction)
David Thiel — iOS Application Security (nonfiction)
Georgia Weidman — Penetration Testing (nonfiction)

Finally, the two SF bundles:

Buzz Aldrin & John Barnes — Encounter with Tiber (sff)
Poul Anderson — Orion Shall Rise (sff)
Greg Bear — The Forge of God (sff)
Octavia E. Butler — Dawn (sff)
William C. Dietz — Steelheart (sff)
J.L. Doty — A Choice of Treasons (sff)
Harlan Ellison — The City on the Edge of Forever (sff)
Toh Enjoe — Self-Reference ENGINE (sff)
David Feintuch — Midshipman's Hope (sff)
Alan Dean Foster — Icerigger (sff)
Alan Dean Foster — Mission to Moulokin (sff)
Alan Dean Foster — The Deluge Drivers (sff)
Taiyo Fujii — Orbital Cloud (sff)
Hideo Furukawa — Belka, Why Don't You Bark? (sff)
Haikasoru (ed.) — Saiensu Fikushon 2016 (sff anthology)
Joe Haldeman — All My Sins Remembered (sff)
Jyouji Hayashi — The Ouroboros Wave (sff)
Sergei Lukyanenko — The Genome (sff)
Chohei Kambayashi — Good Luck, Yukikaze (sff)
Chohei Kambayashi — Yukikaze (sff)
Sakyo Komatsu — Virus (sff)
Miyuki Miyabe — The Book of Heroes (sff)
Kazuki Sakuraba — Red Girls (sff)
Robert Silverberg — Across a Billion Years (sff)
Allen Steele — Orbital Decay (sff)
Bruce Sterling — Schismatrix Plus (sff)
Michael Swanwick — Vacuum Flowers (sff)
Yoshiki Tanaka — Legend of the Galactic Heroes, Volume 1: Dawn (sff)
Yoshiki Tanaka — Legend of the Galactic Heroes, Volume 2: Ambition (sff)
Yoshiki Tanaka — Legend of the Galactic Heroes, Volume 3: Endurance (sff)
Tow Ubukata — Mardock Scramble (sff)
Sayuri Ueda — The Cage of Zeus (sff)
Sean Williams & Shane Dix — Echoes of Earth (sff)
Hiroshi Yamamoto — MM9 (sff)
Timothy Zahn — Blackcollar (sff)

Phew. Okay, all caught up, and hopefully won't have to dump something like this again in the near future. Also, more books than I have any actual time to read, but what else is new.

17 October, 2017 05:38AM

hackergotchi for Norbert Preining

Norbert Preining

Japanese TeX User Meeting 2017

Last Saturday the Japanese TeX User Meeting took place in Fujisawa, Kanagawa. Those who attended TUG 2013 in Tokyo will remember that the Japanese TeX community is quite big and vibrant. On Saturday about 50 users and developers gathered for a set of talks on a variety of topics.

The first talk was by Keiichiro Shikano (鹿野 桂一郎) on using Markup text to generate (La)TeX and HTML. He presented a variety of markup formats, including his own tool xml2tex.

The second talk was by Masamichi Hosoda (細田 真道) on reducing the size of PDF files using PDFmark extraction. As a contributor to many projects including Texinfo and LilyPond, Masamichi Hosoda told us horror stories about multiple font embedding in the manual of LilyPond, the permanent need to adapt to newer Ghostscript versions, and the very recent development in Ghostscript prohibiting the merge of font definitions in PDF files.

Next up was Yusuke Terada (寺田 侑祐) on grading exams using TeX. Working through hundreds and hundreds of exams and doing the grading is something many of us are used to, and I think nobody really enjoys it. Yusuke Terada has combined various tools, including scanning and PDF merging using pdfpages, to generate gradable PDFs which were then checked on an iPad. On the way he hit some limits in dvipdfmx on the number of images, but this was obviously only a small bump in the road. Now if that could be automated as a nice application, it would be a big hit, I guess!

The fourth talk was by Satoshi Yamashita (山下 哲) on the preparation of slides using KETpic. KETpic is a long running project by Setsuo Takato (高遠節夫) for the generation of graphics, in particular using Cinderella. KETpic and KETcindy integrate with lots of algebraic and statistical programs (R, Maxima, SciLab, …) and have a long history of development. Currently there are activities to incorporate it into TeX Live.

The fifth talk was by Takuto Asakura (朝倉 卓人) on programming TeX using expl3, the main building block of the LaTeX3 project, which has already been adopted by many TeX developers. Takuto Asakura came to fame at this year's TUG/BachoTeX 2017 when he won the W. J. Martin Prize for his presentation Implementing bioinformatics algorithms in TeX. I think we can expect great new developments from Takuto!

The last talk was by myself on fmtutil and updmap, two of the main management programs in any TeX installation, presenting the changes introduced over the last year, including the most recent release of TeX Live. Details have been posted on my blog, and a lengthy article in TUGboat 38:2, 2017 is available on this topic, too.

After the conference, about half of the participants joined a social dinner in a nearby izakaya, followed by an after-dinner beer tasting at a local craft beer place. Thanks to Tatsuyoshi Hamada for the organization.

As usual, the Japanese TeX User Meetings are a great opportunity to discuss new features and make new friends. I am always grateful to be part of this very nice community! I am looking forward to the next year’s meeting.

17 October, 2017 05:22AM by Norbert Preining

François Marier

Checking Your Passwords Against the Have I Been Pwned List

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to see whether any of them are compromised and should be changed immediately. This meant that I needed to download the list and do the lookups locally, since it's not a good idea to send your current passwords to this third-party service.

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to set up your database.
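
To give a rough idea of what such a local lookup involves, here is a minimal Python sketch. It is not the tool itself: the database name and the hibp table with its single hash column are assumptions for illustration, not necessarily the schema the actual tool uses. The HIBP list contains uppercase SHA-1 hashes, one per line.

import getpass
import hashlib

import psycopg2

# Connect to a local database holding the imported HIBP hashes
# (database and table names are assumptions for this sketch).
conn = psycopg2.connect("dbname=hibp")
cur = conn.cursor()

# Read the password without echoing it, then hash it the way the
# HIBP list does: SHA-1, uppercase hex.
password = getpass.getpass("Password to check: ")
digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

cur.execute("SELECT 1 FROM hibp WHERE hash = %s", (digest,))
if cur.fetchone():
    print("This password appears in the HIBP list -- change it!")
else:
    print("Not found in the list.")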

17 October, 2017 05:10AM

October 16, 2017

hackergotchi for Gustavo Noronha Silva

Gustavo Noronha Silva

Who knew we still had low-hanging fruits?

Earlier this month I had the pleasure of attending the Web Engines Hackfest, hosted by Igalia at their offices in A Coruña, and also sponsored by my employer, Collabora, Google and Mozilla. It has grown a lot and we had many new people this year.

Fun fact: I am one of the 3 or 4 people who have attended all of the editions of the hackfest since its inception in 2009, when it was called WebKitGTK+ hackfest \o/


It was a great get together where I met many friends and made some new ones. Had plenty of discussions, mainly with Antonio Gomes and Google’s Robert Kroeger, about the way forward for Chromium on Wayland.

We had the opportunity of explaining how we at Collabora cooperated with Igalians to implement and optimise a Wayland nested compositor for WebKit2 that shares buffers between processes in an efficient way, even on broken drivers. Most of the discussions and some of the work that led to this were done in previous hackfests, by the way!


The idea seems to have been mostly welcomed, the only concern being that Wayland’s interfaces would need to be tested for security (fuzzed). So we may end up going that same route with Chromium for allowing process separation between the UI and GPU (currently being renamed to Viz) processes.

On another note, and going back to the title of the post, at Collabora we have recently adopted Mattermost to replace our internal IRC server. Many Collaborans have decided to use Mattermost through an Epiphany Web Application or through a simple Python application that just shows a GTK+ window wrapping a WebKitGTK+ WebView.


Some people noticed that when the connection was lost, Mattermost would take a very long time to notice and reconnect – its web sockets were taking a long, long time to time out, according to our colleague Andrew Shadura.

I did some quick searching on the codebase and noticed WebCore has a NetworkStateNotifier interface that it uses to get notified when connection changes. That was not implemented for WebKitGTK+, so it was likely what caused stuff to linger when a connection hiccup happened. Given we have GNetworkMonitor, implementation of the missing interfaces required only 3 lines of actual code (plus the necessary boilerplate)!
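
As a small illustration of why the fix could be so short, here is a PyGObject sketch of the GNetworkMonitor API that the patch builds on. This is not the actual WebCore C++ implementation, just a demonstration of the underlying GIO mechanism:

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

# GIO picks the best platform implementation of the monitor for us.
monitor = Gio.NetworkMonitor.get_default()

def on_network_changed(monitor, available):
    # A NetworkStateNotifier-style backend would forward this change
    # to the engine, so that e.g. stale web sockets get torn down.
    print("network available:", available)

monitor.connect("network-changed", on_network_changed)
GLib.MainLoop().run()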


I was surprised to still find such a low-hanging fruit in WebKitGTK+, so I decided to look for more. It turns out WebCore also has a notifier for low-power situations, which was implemented only by the iOS port, and causes the engine to throttle some timers and avoid some expensive checks it would do in normal situations. This required a few more lines to implement using upower-glib, but not that many either!

That was the fun I had during the hackfest in terms of coding. Mostly I had fun just lurking in break out sessions discussing the past, present and future of tech such as WebRTC, Servo, Rust, WebKit, Chromium, WebVR, and more. I also beat a few challengers in Street Fighter 2, as usual.

I’d like to say thanks to Collabora, Igalia, Google, and Mozilla for sponsoring and attending the hackfest. Thanks to Igalia for hosting and to Collabora for sponsoring my attendance along with two other Collaborans. It was a great hackfest and I’m looking forward to the next one! See you in 2018 =)

16 October, 2017 06:23PM by kov

hackergotchi for Yves-Alexis Perez

Yves-Alexis Perez

OpenPGP smartcard transition (part 1.5)

Following the news about the ROCA vulnerability (weak key generation in Infineon-based smartcards, more info here and here) I can confirm that the Almex smartcards I mentioned in my last post (which are Infineon-based) are indeed vulnerable.

I've contacted Almex for more details, but if you were interested in buying that smartcard, you might want to refrain for now.

It does *not* affect keys generated off-card and later injected (the process I use myself).

 

16 October, 2017 03:32PM by Yves-Alexis (corsac@debian.org)

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

No more no surprises

Debian has generally always had, as a rule, “sane defaults” and “no surprises”. This was completely shattered for me when Vim decided to hijack the mouse from my terminal and break all copy/paste functionality. This has occurred since the release of Debian 9.

I expect for my terminal to behave consistently, and this is broken every time I log in to a Debian 9 system where I have not configured Vim to disable this functionality. I also see I’m not alone in this frustration.

To fix this, in your .vimrc:

if !has("gui_running")
  set mouse=
endif

(This will check to see if you're using GVim or similar, where it would be reasonable to expect the mouse to work.)

This is perhaps not aggressive enough though. I never want to have console applications trying to use the mouse. I’ve configured rxvt to do things like open URLs in Firefox, etc. that I always want to work, and I always want my local clipboard to be used so I can copy/paste between remote machines.

I’ve found a small patch that would appear to disable mouse reporting for rxvt, but unfortunately I cannot do this through an Xresources option. If someone is looking for something to do for Hacktoberfest, I’d love to see this be an option for rxvt without re-compiling:

diff --git a/src/rxvt.h b/src/rxvt.h
index 5c7cf66..2751ba3 100644
--- a/src/rxvt.h
+++ b/src/rxvt.h
@@ -646,7 +646,7 @@ enum {
 #define PrivMode_ExtMouseRight  (1UL<<24) // xterm pseudo-utf-8, but works in non-utf-8-locales
 #define PrivMode_BlinkingCursor (1UL<<25)
 
-#define PrivMode_mouse_report   (PrivMode_MouseX10|PrivMode_MouseX11|PrivMode_MouseBtnEvent|PrivMode_MouseAnyEvent)
+#define PrivMode_mouse_report   0 /* (PrivMode_MouseX10|PrivMode_MouseX11|PrivMode_MouseBtnEvent|PrivMode_MouseAnyEvent) */
 
 #ifdef ALLOW_132_MODE
 # define PrivMode_Default (PrivMode_Autowrap|PrivMode_ShiftKeys|PrivMode_VisibleCursor|PrivMode_132OK)

16 October, 2017 08:00AM

Russ Allbery

Free software log (September 2017)

I said that I was going to start writing these regularly, so I'm going to stick to it, even when the results are rather underwhelming. One of the goals is to make the time for more free software work, and I do better at doing things that I record.

The only piece of free software work for September was that I made rra-c-util compile cleanly with the Clang static analyzer. This was fairly tedious work that mostly involved unconfusing the compiler or converting (semi-intentional) crashes into explicit asserts, but it unblocks using the Clang static analyzer as part of the automated test suite of my other projects that are downstream of rra-c-util.

One of the semantic changes I made was that the vector utilities in rra-c-util (which maintain a resizable array of strings) now always allocate room for at least one string pointer. This wastes a small amount of memory for empty vectors that are never used, but ensures that the strings struct member is always valid. This isn't, strictly speaking, a correctness fix, since all the checks were correct, but after some thought, I decided that humans might have the same problem that the static analyzer had. It's a lot easier to reason about a field that's never NULL. Similarly, the replacement function for a missing reallocarray now does an allocation of size 1 if given a size of 0, just to avoid edge case behavior. (I'm sure the behavior of a realloc with size 0 is defined somewhere in the C standard, but if I have to look it up, I'd rather not make a human reason about it.)

I started on, but didn't finish, making rra-c-util compile without Clang warnings (at least for a chosen set of warnings). By far the hardest problem here are the Clang warnings for comparisons between unsigned and signed integers. In theory, I like this warning, since it's the cause of a lot of very obscure bugs. In practice, gah does C ever do this all over the place, and it's incredibly painful to avoid. (One of the biggest offenders is write, which returns a ssize_t that you almost always want to compare against a size_t.) I did a bunch of mechanical work, but I now have a lot of bits of code like:

    if (status < 0)
        return;
    written = (size_t) status;
    if (written < avail)
        buffer->left += written;

which is ugly and unsatisfying. And I also have a ton of casts, such as with:

    buffer_resize(buffer, (size_t) st.st_size + used);

since st.st_size is an off_t, which may be signed. This is all deeply unsatisfying and ugly, and I think it makes the code moderately harder to read, but I do think the warning will potentially catch bugs and even security issues.

I'm still torn. Maybe I can find some nice macros or programming styles to avoid the worst of this problem. It definitely requires more thought, rather than just committing this huge mechanical change with lots of ugly code.

Mostly, this kind of nonsense makes me want to stop working on C code and go finish learning Rust....

Anyway, apart from work, the biggest thing I managed to do last month that was vaguely related to free software was upgrading my personal servers to stretch (finally). That mostly went okay; only a few things made it unnecessarily exciting.

The first was that one of my systems had a very tiny / partition that was too small to hold the downloaded debs for the upgrade, so I had to resize it (VM disk, partition, and file system), and that was a bit exciting because it has an old-style DOS partition table that isn't aligned (hmmm, which is probably why disk I/O is so slow on those VMs), so I had to use the obsolete fdisk -c=dos mode because I wasn't up for replacing the partition right then.

The second was that my first try at an upgrade died with a segfault during the libc6 postinst and then every executable segfaulted. A mild panic and a rescue disk later (and thirty minutes and a lot of swearing), I tracked the problem down to libc6-xen. Nothing in the dependency structure between jessie and stretch forces libc6-xen to be upgraded in lockstep or removed, but it's earlier in the search path. So ld.so gets upgraded, and then finds the old libc6 from the libc6-xen package, and the mismatch causes immediate segfaults. A chroot dpkg --purge from the rescue disk solved the problem as soon as I knew what was going on, but that was a stressful half-hour.

The third problem was something I should have known was going to be an issue: an old Perl program that does some internal stuff for one of the services I ran had a defined @array test that has been warning for eons and that I never fixed. That became a full syntax error with the most recent Perl, and then I fixed it incorrectly the first time and had a bunch of trouble tracking down what I'd broken. All sorted out now, and everything is happily running stretch. (ejabberd, which other folks had mentioned was a problem, went completely smoothly, although I suspect I now have too many of the plugin packages installed and should do a purging.)

16 October, 2017 04:47AM

hackergotchi for Norbert Preining

Norbert Preining

Fixing vim in Debian

I was wondering for quite some time why vim on my server behaves so stupidly with respect to the mouse: jumping around, copy and paste not being possible the usual way. All this despite having

  set mouse=

in my /etc/vim/vimrc.local. Finally I found out why, thanks to bug #864074 and fixed it.

The whole mess comes from the fact that, when there is no ~/.vimrc, vim loads defaults.vim after vimrc.local, thus overwriting several settings put in there.

There is a comment (which I hadn’t seen, though) in /etc/vim/vimrc explaining this:

" Vim will load $VIMRUNTIME/defaults.vim if the user does not have a vimrc.
" This happens after /etc/vim/vimrc(.local) are loaded, so it will override
" any settings in these files.
" If you don't want that to happen, uncomment the below line to prevent
" defaults.vim from being loaded.
" let g:skip_defaults_vim = 1

I agree that this is a good way to set up vim on a normal installation of Vim, but the Debian package could do better. The problem is laid out clearly in the bug report: if there is no ~/.vimrc, settings in /etc/vim/vimrc.local are overwritten.

This is as counterintuitive as it can be in Debian – and I don’t know any other package that does it in a similar way.

Since the settings in defaults.vim are quite reasonable, I want to have them, and only fix the few items I disagree with, like the mouse. In the end, what I did is the following in my /etc/vim/vimrc.local:

if filereadable("/usr/share/vim/vim80/defaults.vim")
  source /usr/share/vim/vim80/defaults.vim
endif
" make sure that defaults.vim is not loaded again afterwards!
let g:skip_defaults_vim = 1

" turn off the mouse
set mouse=
" other override settings go here

There is probably a better way to get a generic load statement that does not depend on the Vim version (sourcing $VIMRUNTIME/defaults.vim instead of a hard-coded path would presumably do), but for now I am fine with that.

16 October, 2017 01:18AM by Norbert Preining

October 15, 2017

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

Free Software Efforts (2017W41)

Here’s my weekly report for week 41 of 2017. In this week I have explored some Java 8 features, looked at automatic updates in a few Linux distributions and decided that actually I don’t need swap anymore.

Debian

The issue that was preventing the migration of the Tasktools Packaging Team’s mailing list from Alioth to Savannah has now been resolved.

Ana’s chkservice package that I sponsored last week has been ACCEPTED into unstable and since MIGRATED to testing.

Tor Project

I have produced a patch for the Tor Project website to update links to the Onionoo documentation now that it has moved (#23802). I’ve updated the Debian and Ubuntu relay configuration instructions to use systemctl instead of service where appropriate (#23048).

When a Tor relay is less than 2 years old, an alert will now appear on Atlas linking to the new relay lifecycle blog post (#23767). This should hopefully help new relay operators understand why their relay is not immediately fully loaded but instead takes some time to ramp up.

I have gone through the tickets for Tor Cloud and did not find any tickets that contain any important information that would be useful to someone reviving the project. I have closed out these tickets and the Tor Cloud component no longer has any non-closed tickets (#7763, #8544, #8768, #9064, #9751, #10282, #10637, #11153, #11502, #13391, #14035, #14036, #14073, #15821 ).

I’ve continued to work on turning the Atlas application into an integrated part of Tor Metrics (#23518 ) and you can see some progress here.

Finally, I’ve continued hacking on a Twitter bot to tweet factoids about the public Tor network and you can now enjoy some JavaDoc documentation if you’d like to learn a little about its internals. I am still waiting for a git repository to be created (#23799 ) but will be publishing the sources shortly after that ticket is actioned.

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remain £60.52. I do not believe that any hardware I rely on is at imminent risk of failure.

I’d like to thank Digital Ocean for providing me with further credit for their platform to support my open source work.

I do not find it likely that I’ll be travelling to Cambridge for the miniDebConf, as the train alone would be around £350 and hotel accommodation a further £600 (to include both me and Ana).

15 October, 2017 10:00PM

hackergotchi for Norbert Preining

Norbert Preining

TeX Live Manager: JSON output

With the development of TLCockpit continuing, I found the need for an easy exchange format between the TeX Live Manager tlmgr and frontend programs like TLCockpit. Thus, I have implemented JSON output for the tlmgr info command.

While the format is not 100% stable – I might change some things – I consider it pretty settled. The output of tlmgr info --data json is a JSON array with JSON objects for each package requested (the default is to list all).

[ TLPackageObj, TLPackageObj, ... ]

The structure of the JSON object TLPackageObj reflects the internal Perl hash. Keys guaranteed to be present are name (String) and available (Boolean). In case the package is available, there are the following further keys, sorted by their type:

  • String type: name, shortdesc, longdesc, category, catalogue, containerchecksum, srccontainerchecksum, doccontainerchecksum
  • Number type: revision, runsize, docsize, srcsize, containersize, srccontainersize, doccontainersize
  • Boolean type: available, installed, relocated
  • Array type: runfiles (Strings), docfiles (Strings), srcfiles (Strings), executes (Strings), depends (Strings), postactions (Strings)
  • Object type:
    • binfiles: keys are architecture names, values are arrays of strings (list of binfiles)
    • binsize: keys are architecture names, values are numbers
    • docfiledata: keys are docfile names, values are objects with optional keys details and lang
    • cataloguedata: optional keys are topics, version, license, ctan, date; values are all strings

A rather long example showing the output for the package latex, formatted with json_pp and having the list of files and the long description shortened:

[
   {
      "installed" : true,
      "doccontainerchecksum" : "5bdfea6b85c431a0af2abc8f8df160b297ad73f6a324ca88df990f01f24611c9ae80d2f6d12c7b3767308fbe3de3fca3d11664b923ea4080fb13fd056a1d0c3d",
      "docfiles" : [
         "texmf-dist/doc/latex/base/README.txt",
         ....
         "texmf-dist/doc/latex/base/webcomp.pdf"
      ],
      "containersize" : 163892,
      "depends" : [
         "luatex",
         "pdftex",
         "latexconfig",
         "latex-fonts"
      ],
      "runsize" : 414,
      "relocated" : false,
      "doccontainersize" : 12812184,
      "srcsize" : 752,
      "revision" : 43813,
      "srcfiles" : [
         "texmf-dist/source/latex/base/alltt.dtx",
         ....
         "texmf-dist/source/latex/base/utf8ienc.dtx"
      ],
      "category" : "Package",
      "cataloguedata" : {
         "version" : "2017/01/01 PL1",
         "topics" : "format",
         "license" : "lppl1.3",
         "date" : "2017-01-25 23:33:57 +0100"
      },
      "srccontainerchecksum" : "1d145b567cf48d6ee71582a1f329fe5cf002d6259269a71d2e4a69e6e6bd65abeb92461d31d7137f3803503534282bc0c5546e5d2d1aa2604e896e607c53b041",
      "postactions" : [],
      "binsize" : {},
      "longdesc" : "LaTeX is a widely-used macro package for TeX, [...]",
      "srccontainersize" : 516036,
      "containerchecksum" : "af0ac85f89b7620eb7699c8bca6348f8913352c473af1056b7a90f28567d3f3e21d60be1f44e056107766b1dce8d87d367e7f8a82f777d565a2d4597feb24558",
      "executes" : [],
      "binfiles" : {},
      "name" : "latex",
      "catalogue" : null,
      "docsize" : 3799,
      "available" : true,
      "runfiles" : [
         "texmf-dist/makeindex/latex/gglo.ist",
         ...
         "texmf-dist/tex/latex/base/x2enc.dfu"
      ],
      "shortdesc" : "A TeX macro package that defines LaTeX"
   }
]
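
As a quick illustration of how a frontend might consume this output, here is a minimal Python sketch; it assumes a TeX Live installation new enough to support tlmgr info --data json, and just prints a few of the guaranteed fields:

import json
import subprocess

# Ask tlmgr for machine-readable data on one package (works the same
# for several packages, or for all of them if none is named).
out = subprocess.run(
    ["tlmgr", "info", "--data", "json", "latex"],
    capture_output=True, text=True, check=True,
).stdout

for pkg in json.loads(out):
    # name and available are guaranteed; the other keys only exist
    # when the package is available.
    if pkg["available"]:
        print(pkg["name"], pkg.get("revision"), pkg.get("shortdesc"))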

What is currently not available via tlmgr info and thus also not via the JSON output is access to virtual TeX Live databases with several member databases (multiple repositories). I am thinking about how to incorporate this information.

These changes are currently available in the tlcritical repository, but will enter proper TeX Live repositories soon.

Using this JSON output I will rewrite the current TLCockpit tlmgr interface to display more complete information.

15 October, 2017 01:32AM by Norbert Preining

October 14, 2017

Lior Kaplan

Debian Installer git repository

While dealing with d-i’s translations last month at FOSScamp, I was kinda surprised it’s still on SVN. While reviewing PO files from others, I couldn’t select specific parts to commit.

Debian does have a git server, and many DDs (Debian Developers) use it for their Debian work, but it’s not as public as I wish it to be, meaning I lack the pull / merge request abilities as well as the review process.

Recently I got a reminder that the D-I Hebrew translation needs some love. I asked my local community for help. Receiving a PO file by mail reminded me of the SVN annoyance. So this time I decided to convert it to git and ask people to send me pull requests. Another benefit is making the process more transparent, as others can see these PRs (and hopefully comment if needed).

For this experiment, I opened a repository on GitHub at https://github.com/kaplanlior/debian-installer I know it isn’t open source like GitLab, but it’s a popular choice, which is a good start for my experiment. If and when it succeeds, we can discuss the platform.

Debian 9

(featured image by Jonathan Carter)

 


Filed under: Debian GNU/Linux

14 October, 2017 10:15PM by Kaplan

Petter Reinholdtsen

A one-way wall on the border?

I find it fascinating how many of the people being locked inside the proposed border wall between the USA and Mexico support the idea. The proposal to keep Mexicans out reminds me of the propaganda twist from the East German government, which called the wall the “Antifascist Bulwark” after erecting the Berlin Wall, claiming that it was erected to keep enemies from creeping into East Germany, while it was obvious to the people locked inside that it was erected to keep them from escaping.

Do the people in USA supporting this wall really believe it is a one way wall, only keeping people on the outside from getting in, while not keeping people in the inside from getting out?

14 October, 2017 08:10PM

October 13, 2017

hackergotchi for Alex Muntada

Alex Muntada

My Free Software Activities in Jul-Sep 2017

If you read Planet Debian often, you’ve probably noticed a trend of Free Software activity reports at the beginning of the month. At first, those reports seemed a bit unamusing and lengthy, but since I take the time to read them I’ve learnt a lot of things, and now I’m amazed at the amount of work that people are doing for Free Software. Indeed, I already knew that many people are doing lots of work, but reading those reports gives you an actual view of how much it is.

Then, I decided that I should do the same and write some kind of report, since I became a Debian Developer in July. I think it’s a nice way to share your work with others and maybe inspire them, as happened to me. So I asked some of the people that have been inspiring me how they do it. I mean, I was curious to know how they keep track of the work they do and how long it takes to write their reports. It seems that it takes quite some time, is mostly manual work, and usually starts by the end of the month with reviewing their contributions in mailing lists, bug trackers, e-mail folders, etc.

Here I am now, writing my first report about my Free Software activities since July and until September 2017. I hope you like it:

  • Filed bug #867068 in nm.debian.org: Cannot claim account after former SSO alioth cert expired.
  • Replied a request in private mail for becoming the maintainer for the Monero Wallet, that I declined suggesting to file an RFP.
  • Attended DebConf17 DebCamp but I missed most of Open Day and the rest of the Debian conference in Montreal.
  • Rebuilt libdbd-oracle-perl after being removed from testing to enable the transition to perl 5.26.
  • Filed bug #870872 in tracker.debian.org: Server Error (500) when using a new SSO cert.
  • Filed bug #870876 in tracker.debian.org: make subscription easier to upstreams with many packages.
  • Filed bug #871767 in lintian: [checks/cruft] use substr instead of substring in example.
  • Filed bug #871769 in reportbug: man page mentions -a instead of -A.
  • Suggested to remove libmail-sender-perl in bug #790727, since it’s been deprecated upstream.
  • Mentioned -n option for dpt-takeover in how to adopt pkg-perl manual.
  • Fixed a broken link to HCL in https://wiki.debian.org/Hardware.
  • Adopted libapache-admin-config-perl into pkg-perl team, upgraded to 0.95-1 and closed bug #615457.
  • Fixed bug #875835 in libflickr-api-perl: don’t add quote marks in SYNOPSIS.
  • Removed 50 inactive accounts from pkg-perl team in alioth as part of our annual membership ping.

Happy hacking!

 


13 October, 2017 06:47PM by Alex Muntada

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

Qt 4 and 5 and OpenSSL1.0 removal

Today we received updates on the OpenSSL 1.0 removal status:

<https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=828522#206>
<https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=859671#19>

So those removal bugs' severities will be raised to RC in approximately a month.

We still don't have any solutions for Qt 4 or 5.

For the Qt 5 case we will probably keep the bug open until Qt 5.10, which should bring OpenSSL 1.1 support, is in the archive, *or* until the FTP masters decide to remove OpenSSL 1.0. In the latter case the fate will be the same as with Qt 4, below.

For Qt 4 we do not have patches available and there will probably be none in time (remember we do not have upstream support). That, plus the fact that we are actively trying to remove Qt 4 from the archive, means we will remove OpenSSL support. This might mean that apps using Qt 4:

- Might cease to work.
- Might keep working:
  - Informing their users that no SSL support is available → programmer did a good job.
  - Not informing their users that no SSL support is available and establishing connections nonetheless → programmer might not have done a good job.

Trying to inform users as soon as possible,

Lisandro for the Qt/KDE team.

13 October, 2017 02:29PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.17

Weblate 2.17 has been released today. There are quite some performance improvements, improved search, improved access control settings and various other improvements.

Full list of changes:

  • Weblate by default does shallow Git clones now.
  • Improved performance when updating large translation files.
  • Added support for blocking certain emails from registration.
  • Users can now delete their own comments.
  • Added preview step to search and replace feature.
  • Client side persistence of settings in search and upload forms.
  • Extended search capabilities.
  • More fine grained per project ACL configuration.
  • Default value of BASE_DIR has been changed.
  • Added two step account removal to prevent accidental removal.
  • Project access control settings are now editable.
  • Added optional spam protection for suggestions using Akismet.

Update: The bugfix 2.17.1 is out as well, fixing testsuite errors in some setups:

  • Fixed running testsuite in some specific situations.
  • Locales updates.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

13 October, 2017 01:00PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

I need to speak up now X – Economics

Dear all,

This will be a longish blog post (as most of mine are), compiled over days, as there is so little time and so much to share.

I had previously thought to share beautiful photographs of Ganesh mandals taking out the procession at the time of immersion of the idol, or of the last day of Durga Puja, but recent events have left me in no mood to share photos at this point in time. I may share some of them in a future blog post or two.

Before going further, I would like to offer my sympathies and condolences to the people hurt and dislocated by Hurricane Irma, the 2017 Central Mexico earthquake, the recent Las Vegas shooting, and Hurricane Maria in Puerto Rico. I am somewhat nonplussed as to why Americans always want to name hurricanes, which destroy people’s lives and livelihoods built over generations, and why most of the hurricanes are named after women. A look at the weather.com site unveiled the answer to the mystery.

Ironically (or not), I saw some of the best science coverage of earthquakes, and of scientific reporting and analysis in general, that I have seen in a long time in mainstream newspapers in India.

On another note, I don’t understand, or even expect to understand, why the gunman did what he did two days back. Country music AFAIK is one of the most chilled-out kinds of music, in some ways very similar to classical Indian singing, although they are worlds apart in style of singing, renditions, artists, the way they emote, etc. I seriously wish that the gunman had not been shot but caught, and reasons sought for why he did what he did. This is certainly armchair thinking, as I was not at the scene of the crime, but if a Mumbai Police constable armed only with a lathi could do it around a decade ago, why couldn’t the American cops, who are probably trained in innumerable ways to subdue people without killing them? While investigations are on, I suspect that if he had been caught, just as Ajmal Kasab was caught, a lot of revelations might have come up. From what is known, the gentleman was upwardly mobile, i.e. he was white, rich and apparently had no reason to have beef with anybody, especially a crowd swaying to some nice music, all of which makes absolutely no sense.

Indian Economy ‘Slowdown’

Anyways, back to one of the main reasons for writing this blog post. A few days back, an ex-finance minister of India, Yashwant Sinha, wrote what was probably felt by millions of Indians, an Indian Express article called ‘I need to speak up now’.

While there have been many, many arguments made since then by various people (a simple search of ‘I need to speak up’ will turn up many results besides the one I have shared above), the only exception I take to the article is the line “Forty leading companies of the country are already facing bankruptcy proceedings. Many more are likely to follow suit.” I will not bore you, but ask any entrepreneur trying to set up shop in India, i.e. one who actually goes through the process of getting all the licenses for setting up even a small business, about the numerous hurdles and the laid-back, corrupt bureaucracy they have to overcome. I could have interviewed some of my friends who had the conviction and the courage to set up shop and spent more than half a decade getting all the necessary licenses and approvals, but it would probably be too specific to one industry or another and would lead to the same result.

Coincidentally, a new restaurant, Leaf, opened in my vicinity a few weeks before. From the looks of it, it seemed a high-brow, high-priced restaurant, hence, like many others, I did not venture in. After a few days, they introduced south Indian delicacies like masala dosa and uttapam at prices similar to other restaurants around, so I ventured in and bought some south Indian food for mum and me.

A few days later, I became friends with the owner/franchisee and suggested (in a friendly tone) that he make it a CCD-like place, where many people, including yours truly, use the venue to share, strategize and meet with clients.

The CCD joints usually serve coffee and snacks (which are over-priced but still run out pretty fast), but people come for the chilled-out atmosphere and the Wi-Fi access they need for their smartphones, although the Wi-Fi part may soon become redundant with Reliance Jio making a big play.

I also suggested adding more variety and longer serving times (the south Indian items are time-limited), as I see many empty chairs there.

Anyways, the shop-owner/franchisee shared his gross costs, including salary, stocking, electricity and rent, and it doesn’t pan out serving an INR 80/- dish (roughly 1 US dollar and 25 cents); it only works serving INR 400/- dishes (around 6 USD). One full round of INR 400/- dishes across his roughly 12 tables covers his costs for the day; it’s only when he has two full rounds of dishes costing INR 400/- or more that he actually makes a profit, and he is predicting losses for at least six months to a year before he makes a rebound. He needs steady customers rather than just walk-ins to make his business click. Currently his family is bearing the costs. He didn’t mention the taxes, although I know that apart from GST there are still some local body taxes that they will have to pay and comply with.

There is also a multitude of problems in shutting a shop legally, as owners have to navigate the bureaucracy all over again. I have seen more than a few retailers down their shutters for 6-8 months and then either sell to new management, let go of the lease, or simply sell the property to a competitor. The Insolvency and Bankruptcy Code is probably the first proper exit policy for large companies, so the 40-odd companies that Mr. Sinha was talking about were probably sick for a long time.

In India, there is also the additional shame of being a failed entrepreneur, unlike in the west where entrepreneurs move on to their next venture. As seen from Retailing in India, only 3.3% of the population, or at most 4%, is directly or indirectly linked with the retail trade. Most of the economy still derives its wealth from the agrarian sector, which is still reeling under the pressure of the demonetization that happened last year. Al Jazeera surprisingly portrayed a truer picture of the effects demonetization had on the common citizen than many Indian newspapers did at the time. Because of the South African DebConf, I had to resort to debit cards and hence was able to escape standing in the long lines in which many old people and women perished.

It is only yesterday that the Government acknowledged what many prominent Indians have been saying for months now: that we are in a ‘slowdown’. Be aware of the terms being used for effect by the Prime Minister. There are two articles which outline the troubles India is in at the moment. The only bright spot has been e-commerce, which so far has eluded GST, although the Govt. claims regulations will put it in check.

Indian Education System

Interestingly, Ravish Kumar has started a series on NDTV where he is showcasing how the Indian education sector, especially public colleges, has been left to teachers on a contract basis; see the first four episodes on the NDTV channel, starting with the first one I have shared as a hyperlink. I apologize that the series is in Hindi, as the channel is meant for Indians and is mostly limited to the northern areas of the country, although he has been honest that this is because they lack the resources to tackle the amount of information flowing to them. Ravish started the series by sharing information about the U.S., where things are similar, with some teachers needing to sleep in cars because of the high cost of living and some needing to turn to sex work. I was shocked when I read the Guardian article; that is no way to treat our teachers. I went on to read ‘How the American University was Killed’, following the breadcrumbs along the way. Reading it, it seems Indians have been following the American playbook from the 1980s itself. The article talks about HMOs as well, and that seems to have followed here too, going by my own experience with hospital fees and drugs a few weeks/months ago.

A few years ago, when I and some of my friends had the teaching bug and started teaching in a nearby municipal school, a couple of teachers shared that they were doing 2-3 jobs to make ends meet. I don’t know about others in my group, but at least I was cynical, because I thought all the teachers were permanent and made good money, only to realize now that they were probably speaking the truth. When you have to do three jobs to make ends meet, from where do you bring the passion to teach young people, and that too outside the syllabus?

Also, with this new knowledge in hindsight, I take back all the comments I made last year and the year before about the pathetic education being put up by the State. With teachers being pathetically underpaid and almost 60% of teachers being ad-hoc/adjunct, they have to find ways to have some sense of security. Most teachers are bachelors, as they are poor and cannot offer any security (either male or female), and for women, after marriage it actually makes no sense to continue in this profession. I salute all the professors who are ad-hoc in nature and probably will never get a permanent position in their lives.

I think it is in some way thanks to him that the government has chosen to give teachers 7th Pay Commission salaries. While the numbers may appear to be large, there are a lot of questions as to how many people will actually get paid. There are a lot of vacancies which need to be filled quickly, but I don’t see any solution in the next 2-3 years either. The Government has taken the position, as an unwritten policy, of re-hiring retired teachers rather than hiring new young teachers. In this Digital India context, how retired teachers are supposed to understand and then pass on digital concepts is beyond me, when at the few teacher trainings I have seen they lack even the most basic knowledge that I learnt at least a decade or two ago; the difference is that vast. I just don’t know what to say to that. My own experience with my own mother, who had a pretty good education in her time and probably would have made a fine businesswoman had she known she would have to raise a child by herself (along with my maternal grandparents), is testimony to how hard it is for older people to grasp technology, and here I’m talking about just using the interface as a consumer, rather than as a producer or someone in between who has an idea of how companies and governments profit from whatever data is shared one way or the other.

After watching the series/episodes and discussing the issue with my mother, it was revealed that both she and my late maternal grandfather were on a casual/ad-hoc basis for 20-25 years of their service in the defense sector. If Ravish were to do a series on the defense sector, he would probably find the same thing there. To add to that, the defense sector is a vital component of a country’s security. If 60% of the staff in all defense establishments are temporary, how do you ensure the loyalty of the people working therein? That brings to my mind ‘ignorance is bliss’.

Software development and deployment

There is another worry that all are skirting around: the present dispensation/government’s mantra is ‘minimum government, maximum governance’, with digital technologies having all the solutions, which is leading to massive unemployment. Also, from most of the stories/incidents I read in the newspapers, mainstream media and elsewhere, it seems most software deployments done in India are done without any system of internal checks and balances. There is no ‘lintian’ for the software being implemented. Contracts seem to be given to big companies, and there is no mention of what prerequisites or conditions were laid down by the Government for software development and deployment, nor of whether any checks were done to ensure that the software being developed was in accordance with government specifications. Ideally this should all be in the public domain so that questions can be asked and responsibility fixed if things go haywire, as currently they cannot be.

Software issues

As my health has not been that great, I have been taking a bit more time and care while filing bugs. #877638 is a good example. I suspect that part of the problem might be that MATE has moved to GTK 3 while guake still has GTK 2 bindings. I also reported the issue upstream, both in mate-panel and in guake. I haven’t received any response from either upstream.

I have also been fiddling around with gdb to better understand the tool so I can use it in a better way. There are some commands within the gdb interface which seem interesting, and I’ll try out how they perform over the coming days, weeks and months. I hope we see more action on the mate-panel/guake bug as well as the move of guake to GTK+ 3; what seemed like a wait for eternity seems to have been picked up by somebody in the last couple of days. As shared in the ticket there are lots of things still to do, but it seems the heavy lifting has been done. Merging will be tricky, though, as two developers have been trying to update to GTK+ 3, although aichingm seems to have a leg up with his 3! branch.

Another interesting thing I saw is the below picture.

Firefox is out of date on wordpress.com

The Firefox version I was using to test the site/wordpress-wp-admin was Mozilla Firefox 52.4.0, which AFAIK is a pretty recent one, and people using Debian stretch would probably be using the same version (Firefox stable/LTS) rather than the more recent versions. I went to the link it pointed to, and it gave no indication as to why it thought my browser was out of date or what functionality was missing. I have found that WordPress support has declined quite a bit, and people don’t seem to use the forums as much as they used to.

I also filed a few bugs for qalculate: #877716, where a supposedly transitional package removes the actual application; #877717, as the software has moved its repository, tickets and other things to github.com; and lastly #877733. I had been searching for a calculator which can do currency calculations on the fly (say, for doing personal budgeting for the Taiwan DebConf) without needing to manually enter the conversion rates and losing something in the middle. While the current version has support for some limited currencies, the new versions promise more, as other people probably have more diverse needs for currency conversions (people who go long or short on oil or stocks overseas are just one example; I am sure there are many others) than my simplistic ones.


Filed under: Miscellenous Tagged: #American Education System, #bug-filing, #Climate change, #Dignity, #e-commerce, #gtk+3, #gtk2, #Indian Economy 'Slowdown', #Indian Education System, #Insolvency and Bankruptcy Code, #Las Vegas shooting, #Modern Retail in India, #planet-debian, #qalculate, Ad-hoc and Adjunct Professors, wordpress.com

13 October, 2017 11:58AM by shirishag75

hackergotchi for Michal Čihař

Michal Čihař

Using Trezor to store cryptocurencies

For quite some time I have had some cryptocurrencies on hold. These mostly come from the times when it was possible to mine Bitcoin on the CPU, but I've got some small payments recently as well.

I've been using the Electrum wallet so far. It has worked quite well, but with the increasing Bitcoin value I was considering getting a hardware wallet. There are a few options you can use, but I've always preferred Trezor, as that device is made by guys I know. It's also probably the device with the best support (at least I've heard really bad stories about Ledger support).

In the end, what decided it was that they are also using Weblate to translate their user interface and offered me the wallet for free in exchange. That is a price you cannot beat :-). Anyway, the setup was really smooth and I'm now fully set up. This has also made me more open to accepting other cryptocurrencies which are supported by Trezor, so you can now see more options on the Weblate donations page.

Filed under: Debian English SUSE Weblate

13 October, 2017 04:00AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

GitHub Streak: Round Four

Three years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld's secret to productivity: Just keep at it. Don't break the streak.

and showed the first chart of GitHub streaking

github activity october 2013 to october 2014

And two year ago a first follow-up appeared in this post:

github activity october 2014 to october 2015

And a year ago we had another follow-up:

github activity october 2015 to october 2016

And as it is October 12 again, here is the new one:

github activity october 2016 to october 2017

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2017 02:45AM

October 12, 2017

hackergotchi for Joachim Breitner

Joachim Breitner

Isabelle functions: Always total, sometimes undefined

Often, when I mention to people with a strong background in functional programming (whether that means Haskell or Coq or something else) how things work in the interactive theorem prover Isabelle/HOL (in the following just “Isabelle”1), I cause confusion, especially around the issues of what a function is, whether functions are total, and what the business with undefined is. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians.

Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.

HOL is a logic of total functions

If I have an Isabelle function f :: a ⇒ b between two types a and b (the function arrow in Isabelle is ⇒, not →), then – by definition of what it means to be a function in HOL – whenever I have a value x :: a, the expression f x (i.e. f applied to x) is a value of type b. Therefore, and without exception, every Isabelle function is total.

In particular, it cannot be that f x does not exist for some x :: a. This is a first difference from Haskell, which does have partial functions like

spin :: Maybe Integer -> Bool
spin (Just n) = spin (Just (n+1))

Here, neither the expression spin Nothing nor the expression spin (Just 42) produces a value of type Bool: the former raises an exception (“incomplete pattern match”), the latter does not terminate. Confusingly, though, both expressions have type Bool.

Because every function is total, this confusion cannot arise in Isabelle: If an expression e has type t, then it is a value of type t. This trait is shared with other total systems, including Coq.

Did you notice the emphasis I put on the word “is” here, and how I deliberately did not write “evaluates to” or “returns”? This is because of another big source of confusion:

Isabelle functions do not compute

We (i.e., functional programmers) stole the word “function” from mathematics and repurposed it2. But the word “function”, in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind.

What is the difference?

  • A function a → b in functional programming is an algorithm that, given a value of type a, calculates (returns, evaluates to) a value of type b.
  • A function a ⇒ b in math (or Isabelle) associates with each value of type a a value of type b.

For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:

definition foo :: "(nat ⇒ real) ⇒ real" where
  "foo seq = (if convergent seq then lim seq else 0)"

This assigns a real number to every sequence, but it does not compute it in any useful sense.

From this it follows that

Isabelle functions are specified, not defined

Consider this function definition:

fun plus :: "nat ⇒ nat ⇒ nat"  where
   "plus 0       m = m"
 | "plus (Suc n) m = Suc (plus n m)"

To a functional programmer, this reads

plus is a function that analyses its first argument. If that is 0, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one.

which is clearly a description of a computation.

But to Isabelle, the above reads

plus is a binary function on natural numbers, and it satisfies the following two equations: …

And in fact, it is not so much Isabelle that reads it this way, but rather the fun command, which is external to the Isabelle logic. The fun command analyses the given equations, constructs a non-recursive definition of plus under the hood, passes that to Isabelle and then proves that the given equations hold for plus.

One interesting consequence of this is that different specifications can lead to the same functions. In fact, if we defined plus' by recursing on the second argument, we’d obtain the same function (i.e. plus = plus' is a theorem, and there would be no way of telling the two apart).

Termination is a property of specifications, not functions

Because a function does not evaluate, it does not make sense to ask if it terminates. The question of termination arises before the function is defined: The fun command can only construct plus in a way that the equations hold if it passes a termination check – very much like Fixpoint in Coq.

But while the termination check of Fixpoint in Coq is a deep part of the basic logic, in Isabelle it is simply something that this particular command requires for its internal machinery to go through. At no point does a “termination proof of the function” exist as a theorem inside the logic. And other commands may have other means of defining a function that do not even require such a termination argument!

For example, a function specification that is tail-recursive can be turned into a function, even without a termination proof: The following definition describes a higher-order function that iterates its first argument f on the second argument x until it finds a fixpoint. It is completely polymorphic (the single quote in 'a indicates that this is a type variable):

partial_function (tailrec)
  fixpoint :: "('a ⇒ 'a) ⇒ 'a ⇒ 'a"
where
  "fixpoint f x = (if f x = x then x else fixpoint f (f x))"

We can work with this definition just fine. For example, if we instantiate f with (λx. x-1), we can prove that it will always return 0:

lemma "fixpoint (λ n . n - 1) (n::nat) = 0"
  by (induction n) (auto simp add: fixpoint.simps)

Similarly, if we have a function that works within the option monad (i.e. Maybe in Haskell), its specification can always be turned into a function without an explicit termination proof – here one that calculates the Collatz sequence:

partial_function (option) collatz :: "nat ⇒ nat list option"
 where "collatz n =
        (if n = 1 then Some [n]
         else if even n
           then do { ns <- collatz (n div 2);    Some (n # ns) }
           else do { ns <- collatz (3 * n + 1);  Some (n # ns)})"

Note that lists in Isabelle are finite (like in Coq, unlike in Haskell), so this function “returns” a list only if the collatz sequence eventually reaches 1.

I expect these definitions to make a Coq user very uneasy. How can fixpoint be a total function? What is fixpoint (λn. n+1)? What if we run collatz n for an n where the Collatz sequence does not reach 1?3 We will come back to that question after a little detour…

HOL is a logic of non-empty types

Another big difference between Isabelle and Coq is that in Isabelle, every type is inhabited. Just like the totality of functions, this is a very fundamental fact about what HOL defines to be a type.

Isabelle gets away with that design because in Isabelle, we do not use types for propositions (like we do in Coq), so we do not need empty types to denote false propositions.

This design has an important consequence: It allows the existence of a polymorphic expression that inhabits any type, namely

undefined :: 'a

The naming of this term alone has caused a great deal of confusion for Isabelle beginners, or in communication with users of different systems, so I implore you to not read too much into the name. In fact, you will have a better time if you think of it as arbitrary or, even better, unknown.

Since undefined can be instantiated at any type, we can instantiate it for example at bool, and we can observe an important fact: undefined is not an extra value besides the “usual ones”. It is simply some value of that type, which is demonstrated in the following lemma:

lemma "undefined = True ∨ undefined = False" by auto

In fact, if the type has only one value (such as the unit type), then we know the value of undefined for sure:

lemma "undefined = ()" by auto

It is very handy to be able to produce an expression of any type, as we will see in what follows.

Partial functions are just underspecified functions

For example, it allows us to translate incomplete function specifications. Consider this definition, Isabelle’s equivalent of Haskell’s partial fromJust function:

fun fromSome :: "'a option ⇒ 'a" where
  "fromSome (Some x) = x"

This definition is accepted by fun (albeit with a warning), and the generated function fromSome behaves exactly as specified: when applied to Some x, it is x. The term fromSome None is also a value of type 'a, we just do not know which one it is, as the specification does not address that.

So fromSome None behaves just like undefined above, i.e. we can prove

lemma "fromSome None = False ∨ fromSome None = True" by auto

Here is a small exercise for you: Can you come up with an explanation for the following lemma:

fun constOrId :: "bool ⇒ bool" where
  "constOrId True = True"

lemma "constOrId = (λ_.True) ∨ constOrId = (λx. x)"
  by (metis (full_types) constOrId.simps)

Overall, this behavior makes sense if we remember that function “definitions” in Isabelle are not really definitions, but rather specifications. And a partial function “definition” is simply an underspecification. The resulting function is simply any function that fulfills the specification, and the two lemmas above underline that observation.

Nonterminating functions are also just underspecified

Let us return to the puzzle posed by fixpoint above. Clearly, the function – seen as a functional program – is not total: When passed the argument (λn. n + 1) or (λb. ¬b) it will loop forever trying to find a fixed point.

But Isabelle functions are not functional programs, and the definitions are just specifications. What does the specification say about the case when f has no fixed point? It states that the equation fixpoint f x = fixpoint f (f x) holds. And this equation has a solution, for example fixpoint f _ = undefined.

Or more concretely: The specification of the fixpoint function states that fixpoint (λb. ¬b) True = fixpoint (λb. ¬b) False has to hold, but it does not specify which particular value (True or False) it should denote – any is fine.

Not all function specifications are ok

At this point you might wonder: Can I just specify any equations for a function f and get a function out of that? But rest assured: That is not the case. For example, no Isabelle command allows you to define a function bogus :: () ⇒ nat with the equation bogus () = Suc (bogus ()), because this equation does not have a solution.

We can actually prove that such a function cannot exist:

lemma no_bogus: "∄ bogus. bogus () = Suc (bogus ())" by simp

(Of course, not_bogus () = not_bogus () is just fine…)

You cannot reason about partiality in Isabelle

We have seen that there are many ways to define functions that one might consider “partial”. Given a function, can we prove that it is not “partial” in that sense?

Unfortunately, but unavoidably, no: Since undefined is not a separate, recognizable value, but rather simply an unknown one, there is no way of stating that “A function result is not specified”.

Here is an example that demonstrates this: Two “partial” functions (one with not all cases specified, the other one with a self-referential specification) are indistinguishable from the total variant:

fun partial1 :: "bool ⇒ unit" where
  "partial1 True = ()"
partial_function (tailrec) partial2 :: "bool ⇒ unit" where
  "partial2 b = partial2 b"
fun total :: "bool ⇒ unit" where
  "total True = ()"
| "total False = ()"

lemma "partial1 = total ∧ partial2 = total" by auto

If you really do want to reason about partiality of functional programs in Isabelle, you should consider implementing them not as plain HOL functions, but rather use HOLCF, where you can give equational specifications of functional programs and obtain continuous functions between domains. In that setting, ⊥ ≠ () and partial2 = ⊥ ≠ total. We have done that to verify some of HLint’s equations.

You can still compute with Isabelle functions

I hope by this point, I have not scared away anyone who wants to use Isabelle for functional programming, and in fact, you can use it for that. If the equations that you pass to fun are a reasonable definition for a function (in the programming sense), then these equations, used as rewriting rules, will allow you to “compute” that function quite like you would in Coq or Haskell.

Moreover, Isabelle supports code extraction: You can take the equations of your Isabelle functions and have them exported into OCaml, Haskell, Scala or Standard ML. See CoCon for a conference management system with confidentiality verified in Isabelle.

While these usually are the equations you defined the function with, they don't have to be: You can declare other proved equations to be used for code extraction, e.g. to refine your elegant definitions to performant ones.

Like with code extraction from Coq to, say, Haskell, the adequacy of the translations rests on a “moral reasoning” foundation. Unlike extraction from Coq, where you have an (unformalized) guarantee that the resulting Haskell code is terminating, you do not get that guarantee from Isabelle. Conversely, this allows you to reason about and extract non-terminating programs, like fixpoint, which is not possible in Coq.

There is currently ongoing work on verified code generation, where the code equations are reflected into a deep embedding of HOL in Isabelle that would allow explicit termination proofs.

Conclusion

We have seen how in Isabelle, every function is total. Function declarations have equations, but these do not define the function in a computational sense, but rather specify it. Because in HOL there are no empty types, many specifications that appear partial (incomplete patterns, non-terminating recursion) have solutions in the space of total functions. Partiality in the specification is no longer visible in the final product.

PS: Axiom undefined in Coq

This section is speculative, and an invitation for discussion.

Coq already distinguishes between types used in programs (Set) and types used in proofs (Prop).

Could Coq ensure that every t : Set is non-empty? I imagine this would require additional checks in the Inductive command, similar to the checks that the Isabelle command datatype has to perform4, and it would disallow Empty_set.

If so, then it would be sound to add the following axiom

Axiom undefined : forall (a : Set), a.

wouldn't it? This axiom does not have any computational meaning, but that seems to be ok for optional Coq axioms, like classical reasoning or function extensionality.

With this in place, how much of what I describe above about function definitions in Isabelle could now be done soundly in Coq? Certainly pattern matches would not have to be complete and could sport an implicit case _ ⇒ undefined. Would it “help” with non-obviously terminating functions? Would it allow a Coq command Tailrecursive that accepts any tail-recursive function without a termination check?


  1. Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.

  3. Let me know if you find such an n. Besides n = 0.

  4. Like fun, the constructions by datatype are not part of the logic, but create a type definition from more primitive notions that is isomorphic to the specified data type.

12 October, 2017 05:54PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.8.100.1.0

armadillo image

We are thrilled to announce a new big RcppArmadillo release! Conrad recently moved Armadillo to the 8.* series, with significant improvements and speed ups for sparse matrix operations, and more. See below for a brief summary.

This also required some changes at our end which Binxiang Ni provided, and Serguei Sokol improved some instantiations. We now show the new vignette Binxiang Ni wrote for his GSoC contribution, and I converted it (and the other main vignette) to using the pinp package for sleeker pdf vignettes.

This release resumes our bi-monthly CRAN release cycle. I may make interim updates available at GitHub "as needed". And this time I managed to mess up the reverse depends testing, and missed one sync() call on the way back to R---but all that is now taken care of.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 405 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.100.1.0 (2017-10-05)

  • Upgraded to Armadillo release 8.100.1 (Feral Pursuits)

    • faster incremental construction of sparse matrices via element access operators

    • faster diagonal views in sparse matrices

    • expanded SpMat to save/load sparse matrices in coord format

    • expanded .save()/.load() to allow specification of datasets within HDF5 files

    • added affmul() to simplify application of affine transformations

    • warnings and errors are now printed by default to the std::cerr stream

    • added set_cerr_stream() and get_cerr_stream() to replace set_stream_err1(), set_stream_err2(), get_stream_err1(), get_stream_err2()

    • new configuration options ARMA_COUT_STREAM and ARMA_CERR_STREAM

  • Constructors for sparse matrices of types dgt, dtt and dst now use Armadillo code for improved performance (Serguei Sokol in #175 addressing #173)

  • Sparse matrices call .sync() before accessing internal arrays (Binxiang Ni in #171)

  • The sparse matrix vignette has been converted to Rmarkdown using the pinp package, and is now correctly indexed. (#176)

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 October, 2017 02:13AM

October 11, 2017

hackergotchi for Steve Kemp

Steve Kemp

A busy week or two

It feels like the past week or two has been very busy, and so I'm looking forward to my "holiday" next month.

I'm not really having a holiday of course, my wife is slowly returning to work, so I'll be taking a month of paternity leave, taking sole care of Oiva for the month of November. He's still a little angel, and now that he's reached 10 months old he's starting to get much more mobile - he's on the verge of walking, but not quite there yet. Mostly that means he wants you to hold his hands so that he can stand up, swaying back and forth before the inevitable collapse.

Beyond spending most of my evenings taking care of him, from the moment I return from work to his bedtime (around 7:30PM), I've made the Debian Administration website both read-only and much simpler. In the past that site was powered by a lot of servers; I think around 11. Now it has only a small number of machines, which should slowly decrease.

I've ripped out the database host, the redis host, the events-server, the planet-machine, the email-box, etc. Now we have a much simpler setup:

  • Front-end machine
    • Directly serves the code site
    • Directly serves the SSL site which exists solely for Let's Encrypt
    • Runs HAProxy to route the rest of the requests to the cluster.
  • 4 x Apache servers
    • Each one has a (read-only) MySQL database on it for the content.
      • In case of future-compromise I removed all user passwords, and scrambled the email-addresses.
      • I don't think there's a huge risk, but better safe than sorry.
    • Each one runs the web-application.
      • Which now caches each generated page to /tmp/x/x/x/x/$hash if it doesn't exist.
      • If the request is cached it is served from that cache rather than dynamically.

Finally, although I'm slowly making progress with "radio stuff", I've knocked up a simple hack which uses an ultrasonic sensor to determine whether I'm sat in front of my (home) PC. If I am, everything is good. If I'm absent, the music is stopped and the screen locked. Kinda neat.

(Simple ESP8266 device wired to the sensor. When the state changes, a message is posted to Mosquitto, where a listener reacts to the change(s).)
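Such a listener can be very small indeed. Here is a minimal sketch in shell around mosquitto_sub; the topic name, the payloads and the exact commands are all my own invention, not necessarily the real setup:

#!/bin/sh
# React to presence messages published by the ESP8266.
# Topic and payload names are hypothetical.
mosquitto_sub -h localhost -t presence/desk | while read -r state; do
    case "$state" in
        absent)
            mpc pause               # stop the music
            xdg-screensaver lock    # lock the screen
            ;;
        present)
            mpc play                # resume playback
            ;;
    esac
done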

Oh, and that wasn't final after all: I've also transferred my mobile phone from DNA.fi to MoiMobile. The transfer should complete soon; right now my phone is in limbo, active on neither service. Oops.

11 October, 2017 09:00PM

hackergotchi for Michal Čihař

Michal Čihař

New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting request queue has grown too long, so it's time to process it and include new projects.

This time, the newly hosted projects include:

  • Hunspell - famous spell checker
  • Eolie - a web browser for GNOME
  • SkyTube - an open-source YouTube app for Android
  • Eventum - issue tracking system

Additionally there were some notable additions to existing projects:

If you want to support this effort, please donate to Weblate; recurring donations in particular are welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

11 October, 2017 04:00PM

October 10, 2017

Carl Chenet

The Slack Threat

For a long time, electronic mail was the main communication tool for enterprises. Slack, which offers public or private group discussion boards and instant messaging between two people, challenges its position, especially in the IT industry.

Not only does Slack have features known and used since IRC's launch in the late '80s, it also offers file sending and sharing, code quoting, and it indexes everything that goes through the application for later searches. Slack is also modular, with numerous plug-ins to easily add new features.

Using the Software-as-a-Service (SaaS) model, Slack's basic version is free, and users pay for options. Slack is now considered by the GitHub generation as the new main enterprise communication tool.

As I did in my previous article on the Github threat, this one won't promote Slack's advantages, as many other articles have already covered all these points ad nauseam, but will show the other side and warn the companies using this service about its inherent risks. So far, these risks have been ignored, sometimes voluntarily, in the name of the “It works™” ideology, neglecting all economic and safety considerations, and all threats to privacy and individual freedom. We'll see about them below.

Github, a software forge as a SaaS, with all the advantages but also all the risks of its economic model

All your company communication since its creation

When a start-up chooses Slack, all of its internal communication will be stored by Slack. When someone uses this service, the simple fact of chatting through it means that the whole communication is archived.

One may point out that within the basic Slack offer, only the last 10,000 messages can be read and searched. Bad argument. Slack stores every message and every shared file as it pleases. We'll see below that this behavior is of capital importance in the Slack threat to enterprises.

And the problem is the same for all other companies which choose Slack at one point or another. If they replace their traditional communication methods with it, Slack will have access to crucial data, not only in volume, but also because of its value for the company itself… or for anyone interested in this company's life.

Search Your Entire Archive

One of the main arguments for using Slack is its “Search your entire archive” feature. One can search almost anything one can think of. Why? Because everything is indexed. Your team chat archive or the more or less confidential documents exchanged with the accounting department; everything is in it, in order to provide the most effective search tool.

The search bar, well known to Slack users

We can't deny it's a very attractive feature for everyone inside the company. But it is also a very attractive feature for everyone outside of the company who would want to know more about its internal life. Even more so if they're looking for a specific subject.

If Slack is the main communication tool of your company, and if, as I've experienced in my professional life, some teams prefer to use it rather than go to the office next door, or even bug you to put the information on the dedicated channel, one can easily deduce that nothing, in this type of company, escapes Slack. The automatic indexing and the efficiency of the search feature are excellent tools to get all the information needed, in quantity and in quality.

As such, it's a great social engineering tool for everyone who has access to it, with a history as old as the company's use of Slack as a communication tool.

Across borders… And Beyond!

Slack is a Web service which mainly uses Amazon Web Services, especially CloudFront, as stated by the available information on Slack's infrastructure.

Even without a complete study of said infrastructure, it's easy to state that all the data regarding many innovative global companies around the world (for some of them, including all their internal communication since their creation) are located in the United States, or at least in the hands of a US company, which must follow US laws. This is a country with a well-known history of large-scale industrial espionage, as the whistleblower Edward Snowden demonstrated in 2013, and where access to company data has no restriction under the Patriot Act, as in the Microsoft case (2014), where data stored in Ireland by the Redmond software editor was handed over to US authorities.

Edward Snowden, an individual—and corporate—freedom fighter

As such, Slack's automatic indexing and search tool are a boon for anyone (spy agency or hacker) who gains access to it.

Trusting a third party with all, or at least most, of your internal corporate communication is a serious risk for your company if said third party doesn't follow the same regulations as yours, or if it has different interests, whether from a data-security point of view or more globally regarding its competitiveness. A badly timed data leak can be catastrophic.

What’s the point of secretly preparing a new product launch or an aggressive takeover if all your recent Slack conversations have leaked, including your secret plans?

What if… Slack is hacked?

First, let's remember that even if a cyber attack may appear to be a rare or hypothetical scenario to a badly informed and hurried manager, it is far from being as rare as she or he believes (or wants to believe).

Infrastructure hacking is quite common, as a regular visit to Hacker News will quickly demonstrate. And Slack itself has already been hacked.

In February 2015, Slack was the victim of a four-day cyber attack, which the company made public in March. Officially, the unauthorized access was limited to information in the users' profiles. It is impossible to measure exactly what and who was impacted by this attack. In a recent announcement, Yahoo confessed that 3 billion accounts (you read that right: 3 billion) were compromised… in late 2014!

Yahoo, the company which suffered the largest recorded cyberattack in terms of the number of compromised accounts

Officially, Slack stated that “No financial or payment information was accessed or compromised in this attack.” Which is, by far, the least interesting of all the data stored within Slack! With company internal communication indexed (sometimes from the very beginning of said company) and searchable, Slack is a potential target for cybercriminals looking not for its users' financial credentials but for their internal data, already in a usable format. One can imagine that Slack would have to disclose a massive data leak, which couldn't be ignored. But what would happen if only one Slack user were the victim of such a leak?

The Free Alternative Solutions

As we demonstrated above, companies need to find an alternative to Slack, one they can host themselves, to reduce the risks of data leaks, industrial espionage and dependency on their Internet connection. Luckily, Slack's success created its own copycats, some of them free software.

Rocket.chat is one of them. Its comprehensive service offers chat rooms, direct messages and file sharing, but also videoconferencing and screen sharing, and many more features. Check their dedicated page. You can also try an online demo. On top of that, Rocket.chat has a very simple extension system and an API.

Mattermost is another service, which has the advantage of being close to, and compatible with, Slack. It offers numerous features, including the main ones expected from this type of software. It also offers numerous apps and plug-ins to interact with online services, software forges, and continuous integration tools.

It works

In the introduction, we discussed the “It works™” effect, usually invoked to dispel any arguments about data protection and exchange confidentiality like those we discussed in this article. True, a single developer may ask: why worry about it? All I want is to chat with my colleagues and send files!

Because a subscription to the Slack service puts the company continuously at risk in the long term. Maybe it's not the employees' place to worry about it; they just have to do their job as efficiently as possible. On the other side, the company management, usually non-technical, may not be aware of the risks this technical choice creates for their company. The technical management may pretend to be omniscient, but nobody is fooled.

Either someone from management will ask the right question (where are our data and who can access them?), or someone from the technical side will alert them officially about these problems. It is this technical audience, even if not always heard by their management, which is the target of this article. May they find in it the right arguments to be convincing.

We hope that the several points we developed in this article will help you to make the right choice.

About Me

Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker.

Follow me on social networks

Translated from French by Stéphanie Chaptal. Original article written in October 2016.

 

10 October, 2017 10:00PM by Carl Chenet

hackergotchi for Yves-Alexis Perez

Yves-Alexis Perez

OpenPGP smartcard transition (part 1)

A long time ago, I switched my GnuPG setup to a smartcard based one. I kept using the same master key, but:

  • copied the rsa4096 master key to a “master” smartcard, for when I need to sign (certify) other keys;
  • created rsa2048 subkeys (for signature, encryption and authentication) and moved them to an OpenPGP smartcard for daily usage, roughly as sketched below.
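For reference, moving freshly created subkeys to a card looks roughly like the following sketch (KEYID stands in for the real key fingerprint; key 1 selects the first subkey):

gpg --edit-key KEYID
gpg> key 1       # select the subkey to move
gpg> keytocard   # transfer the selected subkey to the smartcard
gpg> save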

I've been working with that setup for a few years now and it is working perfectly fine. The signature counter on the OpenPGP basic card is a bit north of 5000, which is large but not that huge, all things considered (and not counting authentication and decryption key usage).

One very nice feature of using a smartcard is that my laptop (or other machines I work on) never manipulates the private key directly but only sends requests to the card, which is a really huge improvement, in my opinion. But it's also not the perfect solution for me: the OpenPGP card uses a proprietary platform from ZeitControl, named BasicCard. We have very little information on the smartcard, besides the fact that Werner Koch trusts ZeitControl not to mess up. One caveat for me is that the card does not use a certified secure microcontroller like you would find in the smartcard chips used in debit cards or electronic IDs. That means it's not really been audited by a competent hardware lab, and thus can't be considered secure against physical attacks. The card OS software and the application implementing the OpenPGP specification are not public either and have not been audited, to the best of my knowledge.

At one point I was interested in the Yubikey Neo, especially since the architecture Yubico used was common: a (supposedly) certified platform (secure microcontroller, card OS) and a GlobalPlatform / JavaCard virtual machine. The applet used in the Yubikey Neo is open source, too, so you could take a look at it and identify any issue.

Unfortunately, Yubico transitioned to a less common and more proprietary infrastructure for the Yubikey 4: it's no longer JavaCard based, and they don't provide the applet source anymore. This was not really seen as a good move by a lot of people, including Konstantin Ryabitsev (kernel.org administrator). Also, even for the Yubikey Neo it wasn't possible to actually build the applet yourself and inject it on the card: when the Yubikey leaves the facility, the applet is already installed and the smartcard is locked (for obvious security reasons). I've tried asking about getting naked/empty Yubikeys with developer keys to load the applet myself, but it was apparently not possible, or would have required signing an NDA with NXP (the chip maker), which is not really possible as an individual (not that I really want to anyway).

In the meantime, a coworker actually wrote an OpenPGP JavaCard applet, with the intention to support the latest version of the OpenPGP specification, and especially elliptic curve cryptography. The applet is called SmartPGP and has been released on the ANSSI GitHub repository. I investigated a bit, and found a smartcard with the correct specifications: certified (in France or Germany), and supporting JavaCard 3.0.4 (required for ECC). The card can do RSA2048 (unfortunately not RSA4096) and EC with NIST (secp256r1, secp384r1, secp521r1) and Brainpool (P256, P384, P512) curves.

I've ordered some cards, and when they arrived started playing. I've built the SmartPGP applet and pushed it to a smartcard, then generated some keys and tried them with GnuPG. I'm right now in the process of migrating to a new smartcard based on that setup, which seems to work just fine after a few days.

Part two of this series will describe how to build the applet and inject it in the smartcard. The process is already documented here and there, but there are a few things not to forget, like how to lock the card after provisioning, so I guess having the complete process somewhere might be useful in case some people want to reproduce it.

10 October, 2017 08:44PM by Yves-Alexis (corsac@debian.org)

hackergotchi for Michal Čihař

Michal Čihař

Better access control in Weblate

Upcoming Weblate 2.17 will bring improved access control settings. Previously this could be controlled only by server admins, but now the project visibility and access presets can be configured.

This allows you to better tweak access control for your needs. There is an additional choice of making the project public but restricting translations, which has been requested by several projects.

You can see the possible choices on the UI screenshot:

Weblate overall experience

On Hosted Weblate this feature is currently available only to commercial hosting customers. Projects hosted for free are limited to public visibility only.

Filed under: Debian English SUSE Weblate

10 October, 2017 06:45PM

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

Automatic Updates

We have instructions for setting up new Tor relays on Debian. The only time the word “upgrade” is mentioned here is:

Be sure to set your ContactInfo line so we can contact you if you need to upgrade or something goes wrong.

This isn’t great. We should have some decent instructions for keeping your relay up to date too. I’ve been compiling a set of documentation for enabling automatic updates on various Linux distributions, here’s a taste of what I have so far:


Debian

Make sure that unattended-upgrades is installed and then enable the installation of updates (as root):

apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
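The dpkg-reconfigure step writes /etc/apt/apt.conf.d/20auto-upgrades; creating that file by hand with the following content is equivalent:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";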

Fedora 22 or later

Beginning with Fedora 22, you can enable automatic updates via:

dnf install dnf-automatic

In /etc/dnf/automatic.conf set:

apply_updates = yes

Now enable and start automatic updates via:

systemctl enable dnf-automatic.timer
systemctl start dnf-automatic.timer
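You can check that the timer is actually scheduled with:

systemctl list-timers dnf-automatic.timer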

(Thanks to Enrico Zini I know all about these timer units in systemd now.)

RHEL or CentOS

For CentOS, RHEL, and older versions of Fedora, the yum-cron package is the preferred approach:

yum install yum-cron

In /etc/yum/yum-cron.conf set:

apply_updates = yes

Enable and start automatic updates via:

systemctl enable yum-cron.service
systemctl start yum-cron.service

I’d like to collect together instructions also for other distributions (and *BSD and Mac OS). Atlas knows which platform a relay is running on, so there could be a link in the future to some platform specific instructions on how to keep your relay up to date.

10 October, 2017 06:00PM

Jamie McClelland

Docker in Debian

It's not easy getting Docker to work in Debian.

It's not in stable at all:

0 jamie@turkey:~$ rmadison docker.io
docker.io  | 1.6.2~dfsg1-1~bpo8+1 | jessie-backports | source, amd64, armel, armhf, i386
docker.io  | 1.11.2~ds1-5         | unstable         | source, arm64
docker.io  | 1.11.2~ds1-5         | unstable-debug   | source
docker.io  | 1.11.2~ds1-6         | unstable         | source, armel, armhf, i386, ppc64el
docker.io  | 1.11.2~ds1-6         | unstable-debug   | source
docker.io  | 1.13.1~ds1-2         | unstable         | source, amd64
docker.io  | 1.13.1~ds1-2         | unstable-debug   | source
0 jamie@turkey:~$ 

And a problem with runc makes it really hard to get it working on Debian unstable.

These are the steps I took to get it running today (2017-10-10).

Remove runc (allow it to remove containerd and docker.io):

sudo apt-get remove runc

Install docker-runc (now in testing)

sudo apt-get install docker-runc

Fix containerd package to depend on docker-runc instead of runc:

mkdir containerd
cd containerd
apt-get download containerd 
ar x containerd_0.2.3+git20170126.85.aa8187d~ds1-2_amd64.deb
tar -xzf control.tar.gz
sed -i s/runc/docker-runc/g control
tar -c md5sums control | gzip -c > control.tar.gz
ar rcs new-containerd.deb debian-binary control.tar.gz data.tar.xz
sudo dpkg -i new-containerd.deb

Fix docker.io package to depend on docker-runc instead of runc.

mkdir docker
cd docker
apt-get download docker.io
ar x docker.io_1.13.1~ds1-2_amd64.deb
tar -xzf control.tar.gz
sed -i s/runc/docker-runc/g control
tar -c {post,pre}{inst,rm} md5sums control | gzip -c > control.tar.gz
ar rcs new-docker.io.deb debian-binary control.tar.gz data.tar.xz
sudo dpkg -i new-docker.io.deb

Symlink docker-runc => runc

sudo ln -s /usr/sbin/docker-runc /usr/sbin/runc
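To sanity-check the result, the daemon should now start and be able to run a container (assuming network access to pull the image):

sudo systemctl restart docker
sudo docker run --rm hello-world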

Keep apt-get from upgrading until this bug is fixed:

printf "# Remove when docker.io and containerd depend on docker-runc
# instead of normal runc
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=877329
Package: runc 
Pin: release * 
Pin-Priority: -1 

Package: containerd 
Pin: release * 
Pin-Priority: -1 

Package: docker.io
Pin: release * 
Pin-Priority: -1" | sudo tee /etc/apt/preferences.d/docker.pref

Thanks to coderwall for tips on manipulating deb files.

10 October, 2017 04:07PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

Debian and the GDPR

GDPR is a new EU regulation for privacy. The name is short for "General Data Protection Regulation" and it covers all organisations that handle personal data of EU citizens and EU residents. It will become enforceable May 25, 2018 (Towel Day). This will affect Debian. I think it's time for Debian to start working on compliance, mainly because the GDPR requires sensible things.

I'm not an expert on GDPR legislation, but here's my understanding of what we in Debian should do:

  • do a privacy impact assessment, to review and document what data we have and collect, and what risks that poses for the people whose personal data it is, should the data leak

  • only collect personal information for specific purposes, and only use the data for those purposes

  • get explicit consent from each person for all collection and use of their personal information; archive this consent (e.g., list subscription confirmations)

  • allow each person to get a copy of all the personal information we have about them, in a portable manner, and let them correct it if it's wrong

  • allow people to have their personal information erased

  • maybe appoint one or more data protection officers (not sure this is required for Debian)

There's more, but let's start with those.

I think Debian has at least the following systems that will need to be reviewed with regards to the GDPR:

  • db.debian.org - Debian project members, "Debian developers"
  • nm.debian.org
  • contributors.debian.org
  • lists.debian.org - at least membership lists, maybe archives
  • possibly irc servers and log files
  • mail server log files
  • web server log files
  • version control services and repositories

There may be more; these are just off the top of my head.

I expect that mostly Debian will be OK, but we can't just assume that.

10 October, 2017 03:11PM

Reproducible builds folks

Reproducible Builds: Weekly report #128

Here's what happened in the Reproducible Builds effort between Sunday October 1 and Saturday October 7 2017:

Media coverage

Documentation updates

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

32 package reviews have been added, 46 have been updated and 62 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (27)

diffoscope development

strip-nondeterminism development

Rob Browning noticed that strip-nondeterminism was causing serious performance regressions in the Clojure programming language within Debian. After some discussion, Chris Lamb also posted a query to debian-devel in case there were any other programming languages that might be suffering from the same problem.

reprotest development

Versions 0.7.1 and 0.7.2 were uploaded to unstable by Ximin Luo:

  • New features:
    • Add a --auto-build option to try to determine which specific variations cause unreproducibility.
    • Add a --source-pattern option to restrict copying of source_root, and set this automatically in our presets.
  • Usability improvements:
    • Improve error messages in some common scenarios.
      • Giving a source_root or build_command that doesn't exist
      • Using reprotest with default settings after not installing Recommends
    • Output hashes after a successful --auto-build.
    • Print a warning message if we reproduced successfully but didn't vary everything.
  • Fix varying both umask and user_group at the same time.
  • Have dpkg-source extract to different build dir if varying the build-path.
  • Pass --exclude-directory-metadata to diffoscope(1) by default as this is the majority use-case.
  • Various bug fixes to get the basic dsc+schroot example working.

It included contributions already covered by posts of the previous weeks, as well as new ones from:

tests.reproducible-builds.org

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

10 October, 2017 08:08AM

October 09, 2017

Vincent Fourmond

Define a function with inline Ruby code in QSoas

QSoas can read and execute Ruby code directly, while reading command files, or even at the command prompt. For that, just write plain Ruby code inside a ruby...ruby end block. Probably the most useful possibility is to define elaborate functions directly from within QSoas, or, preferably, from within a script; this is an alternative to defining a function in a completely separate Ruby-only file using ruby-run. For instance, you can define a function for plain Michaelis-Menten kinetics with a file containing:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x)
end
ruby end

This defines the function my_func with three parameters, x, vm and km, with the formula:

my_func(x) = vm / (1 + km / x)

You can then test that the function has been correctly defined by running, for instance:

QSoas> eval my_func(1.0,1.0,1.0)
 => 0.5
QSoas> eval my_func(1e4,1.0,1.0)
 => 0.999900009999

This yields the correct answer: the first command evaluates the function with x = 1.0, vm = 1.0 and km = 1.0. For x = km, the result is vm/2 (here 0.5). For x ≫ km, the result is almost vm. You can use the newly defined my_func in any place you would use any Ruby code, such as in the optional argument to generate-buffer, or for arbitrary fits:

QSoas> generate-buffer 0 10 my_func(x,3.0,0.6)
QSoas> fit-arb my_func(x,vm,km)

To redefine my_func, just run the ruby code again with a new definition, such as:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x**2)
end
ruby end

The previous version is just erased, and all new uses of my_func will refer to your new definition.


See for yourself

The code for this example can be found there. Browse the qsoas-goodies github repository for more goodies!

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.1. You can download its source code or buy precompiled versions for MacOS and Windows there.

09 October, 2017 10:31PM by Vincent Fourmond (noreply@blogger.com)

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in September 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I sponsored a new release of hexalate for Unit193 and icebreaker for Andreas Gnau. The latter is a reintroduction.
  • New upstream releases this month: freeorion and hyperrogue.
  • I backported freeciv and freeorion to Stretch.

Debian Java

Debian LTS

This was my nineteenth month as a paid contributor and I have been paid to work 15.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 18 September to 24 September I was in charge of our LTS frontdesk. I triaged bugs in poppler, binutils, kannel, wordpress, libsndfile, libexif, nautilus, libstruts1.2-java, nvidia-graphics-drivers, p3scan, otrs2 and glassfish.
  • DLA-1108-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-1116-1. Issued a security update for poppler fixing 3 CVE.
  • DLA-1119-1. Issued a security update for otrs2 fixing 4 CVE.
  • DLA-1122-1. Issued a security update for asterisk fixing 1 CVE. I also investigated CVE-2017-14099 and CVE-2017-14603. I decided against a backport because the fix was too intrusive and the vulnerable option is disabled by default in Wheezy's version, which makes it a minor issue for most users.
  • I submitted a patch for Debian’s reportbug tool. (#878088) During our LTS BoF at DebConf 17 we came to the conclusion that we should implement a feature in reportbug that checks whether the bug reporter wants to report a regression for a recent security update. Usually the LTS and security teams  receive word from the maintainer or users who report issues directly to our mailing lists or IRC channels. However in some cases we were not informed about possible regressions and the new feature in reportbug shall ensure that we can respond faster to such reports.
  • I started to investigate the open security issues in wordpress and will complete the work in October.

Misc

  • I packaged a new version of xarchiver. Thanks to the work of Ingo Brückl xarchiver can handle almost all archive formats in Debian now.

QA upload

  • I did a QA upload of xball, an ancient game from the '90s that simulates bouncing balls. It should be ready for another decade at least.

Thanks for reading and see you next time.

09 October, 2017 10:18PM by Apo

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, September 2017

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 6 hours from August. I only worked 12 hours, so I will carry over 9 hours to the next month.

I prepared and released another update on the Linux 3.2 longterm stable branch (3.2.93). I then rebased the Debian linux package onto this version, added further security fixes, and uploaded it (DLA-1099-1).

09 October, 2017 04:25PM

Antonio Terceiro

pristine-tar updates

Introduction

pristine-tar is a tool that is present in the workflow of a lot of Debian people. I adopted it last year after it was orphaned by its creator Joey Hess. A little after that, Tomasz Buchert joined me and we are now a functional two-person team.

pristine-tar's goal is to import the content of a pristine upstream tarball into a VCS repository, and to be able to later reconstruct that exact same tarball, bit by bit, based on the contents in the VCS, so we don't have to store a full copy of that tarball. This is done by storing binary delta files which can be used to reconstruct the original tarball from a tarball produced with the contents of the VCS. Ultimately, we want to make sure that the tarball that is uploaded to Debian is exactly the same as the one that has been downloaded from upstream, without having to keep a full copy of it around if all of its contents are already extracted in the VCS anyway.
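For readers who haven't used it, the day-to-day interface boils down to two commands. A sketch, with foo as a stand-in source package and upstream/1.0 as the ref holding the unpacked upstream sources:

# store a small binary delta instead of the full tarball
pristine-tar commit ../foo_1.0.orig.tar.gz upstream/1.0

# later, regenerate the exact original tarball from the repository
pristine-tar checkout ../foo_1.0.orig.tar.gz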

The current state of the art, and perspectives for the future

pristine-tar solves a wicked problem, because our ability to reconstruct the original tarball is affected by changes in the behavior of tar and of all of the compression tools (gzip, bzip2, xz), and by which exact options were used when creating the original tarballs. Because of this, pristine-tar currently has a few embedded copies of old versions of compressors to be able to reconstruct tarballs produced by them, and also relies on an ever-evolving patch to tar that has been carried in Debian for a while.

So basically keeping pristine-tar working is a game of Whac-A-Mole. Joey provided a good summary of the situation when he orphaned pristine-tar.

Going forward, we may need to rely on other ways of ensuring integrity of upstream source code. That could take the form of signed git tags, signed uncompressed tarballs (so that the compression doesn’t matter), or maybe even a different system for storing actual tarballs. Debian bug #871806 contains an interesting discussion on this topic.

Recent improvements

Even if keeping pristine-tar useful in the long term will be hard, too much Debian work currently relies on it, so we can't just abandon it. Instead, we keep figuring out ways to improve it. And I have good news: pristine-tar has recently received updates that improve the situation quite a bit.

In order to be able to understand how much better we are getting at it, I created a visualization of the regression test suite results. With the help of data from there, let's look at the improvements made since pristine-tar 1.38, which was the version included in stretch.

pristine-tar 1.39: xdelta3 by default

This was the first release made after the stretch release, and made xdelta3 the default delta generator for newly-imported tarballs. Existing tarballs with deltas produced by xdelta are still supported, this only affects new imports.

The support for having multiple delta generators was written by Tomasz, and had already been there since 1.35, but we decided to only flip the switch after xdelta3 support was available in a stable release.

pristine-tar 1.40: improved compression heuristics

pristine-tar uses a few heuristics to produce the smallest possible delta, and this includes trying different compression options. In this release Tomasz included a contribution by Lennart Sorensen to also try the --gnu option, which greatly improved the support for rsyncable gzip-compressed files. We can see an example of the type of improvement we got in the regression test suite data for delta sizes for faad2_2.6.1.orig.tar.gz:

In 1.40, the delta produced from the test tarball faad2_2.6.1.orig.tar.gz went down from 800KB, almost the same size as the tarball itself, to 6.8KB

pristine-tar 1.41: support for signatures

This release saw the addition of support for storage and retrieval of upstream signatures, contributed by Chris Lamb.

pristine-tar 1.42: optionally recompressing tarballs

I had this idea and wanted to try it out: most of our problems reproducing tarballs come from tarballs produced with old compressors, from changes in compressor behavior, or from uncommon compression options being used. What if we could just recompress the tarballs before importing them? Yes, this kind of breaks the “pristine” bit of the whole business, but on the other hand, 1) the contents of the tarball are not affected, and 2) even if the initial tarball is not bit by bit the same as the upstream release, at least future uploads of that same upstream version with Debian revisions can be regenerated just fine.

In some cases, as with the test tarball util-linux_2.30.1.orig.tar.xz, recompressing is what makes reproducing the tarball (and thus importing it with pristine-tar) possible at all:

util-linux_2.30.1.orig.tar.xz can only be imported after being recompressed

In other cases, if the current heuristics can’t produce a reasonably small delta, recompressing makes a huge difference. It’s the case for mumble_1.1.8.orig.tar.gz:

with recompression, the delta produced from mumble_1.1.8.orig.tar.gz goes from 1.2MB, or 99% of the size of the original tarball, to 14.6KB, 1% of the size of the original tarball

Recompressing is not enabled by default, and can be enabled by passing the --recompress option. If you are using pristine-tar via a wrapper tool like gbp-buildpackage, you can use the $PRISTINE_TAR environment variable to set options that will affect any pristine-tar invocations.

Also, even if you enable recompression, pristine-tar will only try it if the delta generation fails completely, or if the delta produced from the original tarball is too large. You can control what “too large” means by using the --recompress-threshold-bytes and --recompress-threshold-percent options. See the pristine-tar(1) manual page for details.
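Putting the pieces from the last two paragraphs together, a gbp-based workflow with recompression enabled could look like this sketch (foo and the 10% threshold are arbitrary choices of mine):

export PRISTINE_TAR="--recompress --recompress-threshold-percent 10"
gbp import-orig ../foo_1.0.orig.tar.gz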

09 October, 2017 03:06PM

Petter Reinholdtsen

Generating 3D prints in Debian using Cura and Slic3r(-prusa)

At my nearby maker space, Sonen, I heard the story that it was easier to generate gcode files for their 3D printers (Ultimaker 2+) on Windows and MacOS X than on Linux, because the software involved had to be manually compiled and set up on Linux while premade packages worked out of the box on Windows and MacOS X. I found this annoying, as the software involved, Cura, is free software and should be trivial to get up and running on Linux if someone took the time to package it for the relevant distributions. I even found a request for adding it into Debian from 2013, which had seen some activity over the years but never resulted in the software showing up in Debian. So a few days ago I offered my help to try to improve the situation.

Now I am very happy to see that all the packages required for a working Cura in Debian are uploaded into Debian and waiting in the NEW queue for the ftpmasters to have a look. You can track the progress on the status page for the 3D printer team.

The uploaded packages are a bit behind upstream, and were uploaded now to get slots in the NEW queue while we work on updating the packages to the latest upstream version.

On a related note, two competitors for Cura, which I found harder to use and was unable to configure correctly for Ultimaker 2+ in the short time I spent on it, are already in Debian. If you are looking for 3D printer "slicers" and want something already available in Debian, check out slic3r and slic3r-prusa. The latter is a fork of the former.

09 October, 2017 08:50AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Achievement unlocked - Made with Creative Commons translated to Spanish! (Thanks, @xattack!)

I am very, very, very happy to report this — And I cannot believe we have achieved this so fast:

Back in June, I announced I'd start working on the translation of the Made with Creative Commons book into Spanish.

Over the following few weeks, I worked out the most viable infrastructure, gathered input and commitments for help from a couple of friends, submitted my project for inclusion in the Hosted Weblate translations site (and got it approved!)

Then, we quietly and slowly started working.

Then, as it usually happens in late August, early September... The rush of the semester caught me in full, and I left this translation project for later — For the next semester, perhaps...

Today, I received a mail that surprised me. That stunned me.

99% of strings translated! Of course, it does not look as neat as “100%” would, but there are several strings that are not to be translated.

So, yay for collaborative work! Oh, and FWIW — Thanks to everybody who helped. And really, really, really, hats off to Luis Enrique Amaya, a friend whom I see way less than I should. A LIDSOL graduate, and a nice guy all around. Why him especially? Well... This has several wrinkles to iron out, but, by number of translated lines:

  • Andrés Delgado 195
  • scannopolis 626
  • Leo Arias 812
  • Gunnar Wolf 947
  • Luis Enrique Amaya González 3258

...Need I say more? Luis, I hope you enjoyed reading the book :-]

There is still a lot of work to do, and I'm asking the rest of the team some days so I can get my act together. From the mail I just sent, I need to:

  1. Review the Pandoc conversion process, to get the strings formatted again into a book; I had got this working somewhere in the process, but last I checked it broke. I expect this not to be too much of a hurdle, and it will help all other translations.
  2. Start the editorial process at my Institute. Once the book builds, I'll have to start again the stylistic correction process so the Institute agrees to print it out under its seal. This time, we have the hurdle that our correctors will probably hate us due to part of the work being done before we had actually agreed on some important Spanish language issues... which are different between Mexico, Argentina and Costa Rica (where translators are from).

Anyway, this sets the mood for a great start of the week. Yay!


09 October, 2017 04:05AM by gwolf

October 08, 2017

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

Free Software Efforts (2017W40)

Here’s my weekly report for week 40 of 2017. In this week I have looked at censorship in Catalonia and had my “deleted” Facebook account hacked (which made HN front page). I’ve also been thinking about DRM on the web.

Debian

I have prepared and uploaded fixes for the measurement-kit and hamradio-maintguide packages.

I have also sponsored uploads for gnustep-base (to experimental) and chkservice.

I have given DM upload privileges to Eric Heintzmann for the gnustep-base package as he has shown to care for the GNUstep packages well. In the near future, I think we’re looking at a transition for gnustep-{base,back,gui} as these packages all have updates.

Bugs filed: #877680

Bugs closed (fixed/wontfix): #872202, #877466, #877468

Tor Project

This week I have participated in a discussion around renaming the “Operations” section of the Metrics website.

I have also filed a new ticket on Atlas, which I am planning to implement, to link to the new relay lifecycle post on the Tor Project blog if a relay is less than a week old to help new relay operators understand the bandwidth usage they’ll be seeing.

Finally, I’ve been hacking on a Twitter bot to tweet factoids about the public Tor network. I’ve detailed this in a separate blog post.

Bugs closed (fixed/wontfix): #23683

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remain £60.52. I do not believe that any hardware I rely on is at imminent risk of failure.

08 October, 2017 10:00PM

Michael Stapelberg

Debian stretch on the Raspberry Pi 3 (update)

I previously wrote about my Debian stretch preview image for the Raspberry Pi 3.

Now, I’m publishing an updated version, containing the following changes:

  • SSH host keys are generated on first boot.
  • Old kernel versions are now removed from /boot/firmware when purged.
  • The image is built with vmdb2, the successor to vmdebootstrap. The input files are available at https://github.com/Debian/raspi3-image-spec.
  • The image uses the linux-image-arm64 4.13.4-3 kernel, which provides HDMI output.
  • The image is now compressed using bzip2, reducing its size to 220M.

A couple of issues remain, notably the lack of WiFi and bluetooth support (see wiki:RaspberryPi3 for details). Any help with fixing these issues is very welcome!

As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2017-10-08/. To install the image, insert the SD card into your computer (I’m assuming it’s available as /dev/sdb) and copy the image onto it:

$ wget https://people.debian.org/~stapelberg/raspberrypi3/2017-10-08/2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ bunzip2 2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ sudo dd if=2017-10-08-raspberry-pi-3-buster-PREVIEW.img of=/dev/sdb bs=5M
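Before running dd, it is worth double-checking that /dev/sdb really is the SD card and not another disk, for example with:

$ lsblk -o NAME,SIZE,MODEL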

If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:

$ ssh root@rpi3
# Password is “raspberry”

08 October, 2017 08:45PM

hackergotchi for Joachim Breitner

Joachim Breitner

e.g. in TeX

When I learned TeX, I was told to not write e.g. something, because TeX would think the period after the “g” ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write e.g.\␣.

Years later, I learned from a convincing, but since forgotten source, that in fact e.g.\@ is the proper thing to write. I vaguely remember that e.g.\␣ supposedly affected the inter-word space in some unwanted way. So I did that for many years.

Until I recently was called out for doing it wrong: in fact, e.g.\␣ is the proper way. This was supported by a StackExchange answer written by a LaTeX authority and backed by a reference to documentation. The same question has, however, another answer by another TeX authority, backed by an analysis of the implementation, which concludes that e.g.\@ is proper.

What now? I guess I just have to find out for myself.
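A minimal test file for the comparison could look like this (my reconstruction; the images below show the rendered output):

\documentclass{article}
\begin{document}
% unmarked: TeX sees a sentence end after the period
e.g. next word \\
% candidate 1: explicit control space
e.g.\ next word \\
% candidate 2: \@ resets the space factor after the period
e.g.\@ next word
\end{document}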

The problem and two solutions

The above image shows three variants: The obviously broken version with e.g., and the two contesting variants to fix it. Looks like they yield equal results!

So maybe the difference lies in how \@ and \␣ react when the line length changes, and the word wrapping requires differences in the inter-word spacing. Will there be differences? Let's see:

Expanding whitespace, take 1

Expanding whitespace, take 2

I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?

Expanding whitespace, take 3

Expanding whitespace, take 4

Again, I see the same behaviour.

Conclusion: It does not matter, but e.g.\␣ is less hassle when using lhs2tex than e.g.\@ (which has to be escaped as e.g.\@@), so the winner is e.g.\␣!

(Unless you put it in a macro, in which case \@ might be preferable; and it is still needed between a capital letter and a sentence period.)

08 October, 2017 07:08PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Daniel Pocock

Daniel Pocock

A step change in managing your calendar, without social media

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and WordPress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites, like Agenda du Libre and GriCal, that aggregate events from multiple communities so people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?
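
As a rough illustration of that last idea, events from an existing feed could be turned into a VTODO reminder calendar along these lines (a sketch assuming the Python requests and icalendar libraries; the feed URL and the 60-day deadline heuristic are made up):

# Hypothetical sketch: harvest events from an iCalendar feed and emit
# a "deadline reminder" calendar with one VTODO per event.
from datetime import timedelta

import requests
from icalendar import Calendar, Todo

FEED_URL = "https://example.org/events.ics"  # placeholder feed

source = Calendar.from_ical(requests.get(FEED_URL).text)
reminders = Calendar()
reminders.add("prodid", "-//event-deadline-bot//EN")
reminders.add("version", "2.0")

for event in source.walk("VEVENT"):
    if event.get("dtstart") is None:
        continue
    todo = Todo()
    todo.add("summary", "Submit talk proposal: %s" % event.get("summary"))
    # Assumption: CFPs close roughly two months before the event starts.
    todo.add("due", event.get("dtstart").dt - timedelta(days=60))
    reminders.add_component(todo)

with open("deadlines.ics", "wb") as f:
    f.write(reminders.to_ical())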

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo. It has been hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina, the pizzas are huge, but that didn't stop them disappearing before we finished the photos:

08 October, 2017 05:36PM by Daniel.Pocock

hackergotchi for Ricardo Mones

Ricardo Mones

Cannot enable. Maybe the USB cable is bad?

One of the reasons which made me switch my old 17" BenQ monitor for a Dell U2413 three years ago was that it had an integrated SD card reader. I find it very convenient to take the camera's card out, plug the card into the monitor and click on KDE device monitor's “Open with digiKam” option to download the photos or videos.

But last week, trying to reconnect the USB cable to the new board just didn't work, and the kernel log messages were not very hopeful:

[190231.770349] usb 2-2.3.3: new SuperSpeed USB device number 15 using xhci_hcd
[190231.890439] usb 2-2.3.3: New USB device found, idVendor=0bda, idProduct=0307
[190231.890444] usb 2-2.3.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[190231.890446] usb 2-2.3.3: Product: USB3.0 Card Reader
[190231.890449] usb 2-2.3.3: Manufacturer: Realtek
[190231.890451] usb 2-2.3.3: SerialNumber: F141000037E1
[190231.896592] usb-storage 2-2.3.3:1.0: USB Mass Storage device detected
[190231.896764] scsi host8: usb-storage 2-2.3.3:1.0
[190232.931861] scsi 8:0:0:0: Direct-Access     Generic- SD/MMC/MS/MSPRO  1.00 PQ: 0 ANSI: 6
[190232.933902] sd 8:0:0:0: Attached scsi generic sg5 type 0
[190232.937989] sd 8:0:0:0: [sde] Attached SCSI removable disk
[190243.069680] hub 2-2.3:1.0: hub_ext_port_status failed (err = -71)
[190243.070037] usb 2-2.3-port3: cannot reset (err = -71)
[190243.070410] usb 2-2.3-port3: cannot reset (err = -71)
[190243.070660] usb 2-2.3-port3: cannot reset (err = -71)
[190243.071035] usb 2-2.3-port3: cannot reset (err = -71)
[190243.071409] usb 2-2.3-port3: cannot reset (err = -71)
[190243.071413] usb 2-2.3-port3: Cannot enable. Maybe the USB cable is bad?
...

I was sure the USB 3.0 ports were working, because I had already used them with a USB 3.0 drive, so my first thought was that the monitor's USB hub had failed. It seemed unlikely that a cable which had not been moved in three years would suddenly fail. Is that even possible?

But a few moments later the same cable plugged into a USB 2.0 port worked flawlessly, and all photos could be downloaded, just noticeably slower.

A bit confused, and thinking that since everything else was working the cable probably had to be replaced, I happened to upgrade the system in the meantime. And luck came to the rescue: it now works again with the 4.9.30-2+deb9u5 kernel. Looking at the package changelog, it seems the fix was “usb:xhci:Fix regression when ATI chipsets detected”. So, not a bad cable but a little kernel bug ;-)

Thanks to all involved, especially Ben for the package update!

08 October, 2017 04:17PM

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

Tor Relays on Twitter

A while ago I played with a Twitter bot that would track radio amateurs using a packet radio position reporting system, tweeting their location, a picture from Flickr taken near that location, and a link to their packet radio activity on aprs.fi. It’s really not that hard to put these things together and they can be a lot of fun. The tweets looked like this:

This isn’t about building a system that serves any critical purpose; it’s about fun. As the radio stations were chosen essentially at random, cool things could show up that you wouldn’t otherwise have seen. Maybe you’d spot the callsign of a station you’ve spoken to before on HF, or perhaps you’d see stations in areas near you or in cool places.

On Friday evening I had a go at hacking together a bot for Tor relays. The idea is to post regular snippets of information from the Tor network in which perhaps you’ll spot something insightful or interesting. Not every tweet is going to be amazing, but it wasn’t running for very long before I spotted a relay very close to its 10th birthday:

The relays are chosen at random, and tweet templates are chosen at random too. So far, tweets about individual relays can be about age or current bandwidth contribution to the Tor network. There are also tweets about how many relays run in a particular autonomous system (again, chosen at random) and tweets about the total number of relays currently running. The total relays tweets come with a map:

The maps are produced using xplanet. The Earth will rotate to show the current side in daylight at the time the tweet is posted.
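
For the curious, a relay-age tweet can be put together along these lines (a sketch rather than the actual scripts; it assumes the Onionoo “details” API and leaves out the Twitter posting step):

# Pick a random running relay from Onionoo and format a tweet about
# its age; posting (e.g. via tweepy) is left out.
import random
from datetime import datetime

import requests

ONIONOO = ("https://onionoo.torproject.org/details?running=true&type=relay"
           "&fields=nickname,fingerprint,first_seen")

relay = random.choice(requests.get(ONIONOO).json()["relays"])
first_seen = datetime.strptime(relay["first_seen"], "%Y-%m-%d %H:%M:%S")
age_days = (datetime.utcnow() - first_seen).days

print("%s has been part of the Tor network for %d days! "
      "https://atlas.torproject.org/#details/%s"
      % (relay["nickname"], age_days, relay["fingerprint"]))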

Unfortunately, the bot currently cannot tweet, as the account has been suspended. You should still be able to follow the account though, and tweets will begin appearing again once I’ve resolved the suspension.

I plan to rewrite the mess of cron-activated Python scripts into a coherent Python (maybe Java) application and publish the sources soon. There are also a number of new templates for tweets I’d like to explore, including the number of relays and bandwidth contributed per family, and statistics on operating system diversity.

Update (2017-10-08): The @TorAtlas account should now be unsuspended.

08 October, 2017 02:00PM

hackergotchi for Thomas Lange

Thomas Lange

FAI 5.4 enters the embedded world

Since DebConf 17 I have been working on cross-architecture support for FAI. The new FAI release supports creating cross-architecture disk images; for example, you can build an image for Arm64 (aarch64) on a host running 64-bit x86 Linux (amd64) in about 6 minutes.

The release announcement has more details, and I also created a video showing the build process for an Arm64 disk image and booting this image using Qemu.

I'm happy to be joining the Debian cloud sprint in a week, where more FAI-related work is waiting.

FAI embedded ARM

08 October, 2017 01:05PM

October 07, 2017

hackergotchi for Chris Lamb

Chris Lamb

python-gfshare: Secret sharing in Python

I've just released python-gfshare, a Python library that implements Shamir’s method for secret sharing, a technique to split a "secret" into multiple parts.

A chosen number of those parts is then needed to recover the original secret, but any smaller combination of parts is useless to an attacker.

For instance, you might split a GPG key into a “3-of-5” share, putting one share on each of three computers and two shares on a USB memory stick. You can then use the GPG key on any of those three computers using the memory stick.

If the memory stick is lost you can ultimately recover the key by bringing the three computers back together again.

For example:

$ pip install gfshare
>>> import gfshare
>>> shares = gfshare.split(3, 5, b"secret")
>>> shares
{104: b'1\x9cQ\xd8\xd3\xaf',
 164: b'\x15\xa4\xcf7R\xd2',
 171: b'>\xf5*\xce\xa2\xe2',
 173: b'd\xd1\xaaR\xa5\x1d',
 183: b'\x0c\xb4Y\x8apC'}
>>> gfshare.combine(shares)
b"secret"

After removing two "shares" we can still reconstruct the secret as we have 3 out of the 5 originals:

>>> del shares[104]
>>> del shares[171]
>>> gfshare.combine(shares)
b"secret"

Under the hood it uses Daniel Silverstone’s libgfshare library. The source code is available on GitHub as is the documentation.

Patches welcome.

07 October, 2017 10:12AM

October 06, 2017

Scarlett Clark

KDE at #UbuntuRally in New York! KDE Applications snaps!

#UbuntuRally New York

KDE at #UbuntuRally New York

I was happy to attend Ubuntu Rally last week in New York with Aleix Pol to represent KDE. We were able to accomplish many things during this week, and that is a result of having direct contact with Snap developers. So a big thank you to Canonical for sponsoring me. I now have all of the KDE core applications, and many KDE extragear applications, in the edge channel looking for testers. I have also made a huge dent in the massive KDE PIM snap! I hope to have this done by week’s end. Most of our issue list made it onto TO-DO lists 🙂 So from a KDE perspective, this sprint was a huge success!

06 October, 2017 08:50PM by Scarlett Clark

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in September 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10.5h. During this time, I continued my work on exiv2. I finished reproducing all the issues and then went on to do code reviews to confirm that vulnerabilities were not present when the issue was not reproducible. I found two CVEs where the vulnerability was present in the wheezy version and posted patches in the upstream bug tracker: #57 and #55.

Then another batch of 10 CVEs appeared and I started the process over… I’m currently trying to reproduce the issues.

While doing all this work on exiv2, I also uncovered a failure of the package to build in experimental (reported here).

Misc Debian/Kali work

Debian Live. I merged 3 live-build patches prepared by Matthijs Kooijman and added an armel fix to cope with the rename of the orion5x image into the marvell one. I also uploaded a new live-config to fix a bug with the keyboard configuration. Finally, I released a new live-installer udeb to cope with a recent live-build change that broke the locale selection during the installation process.

Debian Installer. I prepared a few patches on pkgsel to merge a few features that had been added to Ubuntu, most notably the possibility to enable unattended-upgrades by default.
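
For preseeded installations, that feature corresponds to the pkgsel/update-policy debconf question (shown here as documented for Ubuntu’s preseeding; the merged Debian behaviour may differ in detail):

# Preseed sketch: install and activate unattended-upgrades by default.
# Accepted values are "none" and "unattended-upgrades".
d-i pkgsel/update-policy select unattended-upgrades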

More bug reports. I investigated my problem with non-booting qemu images when they are built by vmdebootstrap in a chroot managed by schroot (cf #872999) much further, and while we have much more data, it’s not yet clear why it doesn’t work. But we have a working work-around…

While investigating issues seen in Kali, I opened a bunch of reports on the Debian side:

  • #874657: pcmanfm: should have explicit recommends on lxpolkit | polkit-1-auth-agent
  • #874626: bin-nmu request to complete two transitions and bring back some packages in testing
  • #875423: openssl: Please re-enable TLS 1.0 and TLS 1.1 (at least in testing)

Packaging. I sponsored two uploads (dirb and python-elasticsearch).

Debian Handbook. My work on updating the book mostly stalled. The only thing I did was to review the patch about wireless configuration in #863496. I must really get back to work on the book!

Thanks

See you next month for a new summary of my activities.


06 October, 2017 08:30AM by Raphaël Hertzog