Dienstag, 15. Oktober 2013

Build specific targets of a sub-makefile (inspired by Gruntjs)

I recently had a great idea:
When I was using gruntjs a couple of months ago, I was introduced to its sub-targets, which are specified by separating parent target and sub-target with a colon (":"). Since I am using dedicated sub-makefiles for the individual modules of my project, the idea suggested itself to realize something similar for accessing the goals defined in them. This saves me the trouble of specifying the file to use and of defining the necessary environment variables by hand.

What I initially came up with, and what worked out quite well, was this:
# Idea from http://blog.jgc.org/2007/06/escaping-comma-and-space-in-gnu-make.html
# I put this in my dedicated defs.mk file for general usage.
# "$(,)" expands to a literal comma, "$( )" to a literal space.
 ,             := ,
 space         :=
 space         +=
 $(space)      :=
 $(space)      +=

# Note that the recipe lines below have to be indented with a tab.
 $(PROJECT)\:%:
	$(eval SUBGOALS := $(subst $(,),$( ),$*))
	$(PRINT) making $(PROJECT) with goal\(s\) $(SUBGOALS)...
	$(MAKE) -f "$(abspath $(ROOTDIR)/Make/$(PROJECT).mk)" $(SUBGOALS)
	$(PRINT) done making $(PROJECT) with goal\(s\) $(SUBGOALS)

Here is an example of the benefit this brings:

$: make modulexy:libxy,binxy
making modulexy with goal(s) libxy binxy...
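Under the hood (assuming, just for illustration, that ROOTDIR expands to /home/me/project), this single call boils down to:

 make -f "/home/me/project/Make/modulexy.mk" libxy binxy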

There is one problem, though: while you CAN nest goals to any depth
(e.g. $: make goal1:goal2:goal3: ...),
only on the first level can you pass several goals together to a single instance of make. Something like
"$: make goal1:goal2:goal3.1,goal3.2"
will be executed as
"make -f goal1file goal2:goal3.1 goal3.2".
That means the command tries to run goal3.2 in goal1file (the file belonging to goal1) instead of passing it on to goal2's makefile. The workaround is
"make goal1:goal2:goal3.1,goal2:goal3.2",
which causes two instances of make to be run on the makefile belonging to goal2: one for goal3.1 and one for goal3.2.

So far, I have not felt the need to improve on this (by adding a way to evaluate brackets, for example - imagine "$: make goal1:goal2:(goal3.1,goal3.2)"). But if you are in desperate need of it, be warned that it will be hard to achieve with make-internal tools only; sed will probably be your friend here.
Contact me if you come up with a solution, so I can add it here.

See you soon!

Donnerstag, 10. Oktober 2013

GNU Make as templating engine (???)

Hello visitors!

So this is already the second post about GNU Make that I am writing within two weeks! My intention this time is to describe how you can bring all the power of the makefile parser into any ordinary, build-related text file.

To give you a better picture of what I originally wanted to achieve and why, consider this scenario:
Imagine you are starting work on yet another C/C++ program for GNU, and you are trying to reuse as much basic project scaffolding as possible from your previous project. You copy over makefiles, licenses, other text files and launcher scripts. After that, you replace the old project's name everywhere with the new one's - which is pretty inconvenient when done by hand. You get the point: these project-specific strings occurring in standard files all over the place should be kept in only one place, namely your build system's configuration file.

In my last article I called this build-system configuration file for make "defs.mk". Now the question is: how do we get the definitions from there into our text files?

With the help of the standard *nix tools "cat", "sed" and "printf", I managed to write a small copy-over routine for make which performs make's variable expansion on the text files we want to reuse. Here is the code:

 DOCFILES  = $(DOCSRCDIR)/LICENSE.txt $(DOCSRCDIR)/README.txt
 DOCDESTS  = $(subst $(DOCSRCDIR),$(DISTDIR),$(DOCFILES))
 .PHONY: all
 all: $(DOCDESTS)
 $(DISTDIR)/%: $(DOCSRCDIR)/%
 # Read the file's contents into "DOCCONTENT". Don't forget to escape "#"
 # (s/\#/\\&/g) and to get rid of troublesome newlines (replaced with \n,
 # which printf re-translates later).
 # http://stackoverflow.com/questions/1251999/sed-how-can-i-replace-a-newline-n
	@$(eval DOCCONTENT := $(shell cat $^ | sed -e ':a;N;$$!ba;{s/\#/\\&/g;s/\n/\\n/g}'))
	@printf "$(DOCCONTENT)" > $@

By reading the document into the makefile variable "DOCCONTENT", all currently set variables are expanded (replaced) before we write the content back to its destination. DISTDIR here is our distributable package folder.

Inside the document we can also use make's built-in functions, such as $(wildcard ...) and even $(shell ...). This is a bit risky, though, since almost anything can be done this way. At least no one will complain about lacking features :)
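To give an impression, a hypothetical README.txt in $(DOCSRCDIR) could look like this before the copy-over (PROJECT and VERSION are assumed to be defined in defs.mk):

 Welcome to $(PROJECT) $(VERSION)!
 This package was assembled on $(shell date +%F).

After running make, the copy in $(DISTDIR) contains the actual project name, version and build date. Just be careful with literal "%" characters in your documents - printf will try to interpret them.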

I hope this can be helpful for you!
Regards

Dienstag, 1. Oktober 2013

My idea of how make should be done

Welcome dear reader!

When I start spending time with a new programming language, I usually want to get a clear picture of what my workflow should look like. I simply want to do it right from the beginning, using state-of-the-art tools and profiting from others' experiences, so that I can work efficiently right away and my project evolves around a good structure.

Currently, I am trying out C++ (not for the first time, but this time for real), denying myself an IDE, which would get me started quickly but leave me entirely clueless about what is going on behind the scenes.
Naturally, I was confronted right at the beginning with the decision which build system to use. I started out with the best-known one, GNU Make, and got comfortable with it. Actually it was even so good that when I decided to try CMake instead (which, next to the autotools, seems to be the only relatively widespread build tool out there), I couldn't make out any improvements over make.

So here I want to give you an overview of the conclusions I've drawn about how to use makefiles in a project-independent, yet absolutely clean manner:

  1. First of all: the project's top-level structure:
    I wanted a clean top-level project directory with nothing in it but introductory documentation (license, readme...), one folder for the sources (src) and one to build into (build). Later on, looking into other projects, I saw that being able to offer support for several other (maybe IDE-specific) build systems would be great, too, so I decided to give each of them a separate directory - "Make" making the first step here.
    That leads to something like:
    • project root
      • build
      • src
      • Make
      • ...
      • XCode/Eclipse etc.
    This way we also keep the build-tool related files strictly separate from our code, as another positive side effect.

  2. How to even make it work:
    Now that we have decided to put all our makefiles into the Make directory, how will make get to see the files it has to build together? It does not even run in the project's root directory. That's why we first need to determine this root directory - the parent of the folder our makefile resides in:
     ROOTDIR            = $(realpath $(dir $(realpath $(dir $(lastword $(MAKEFILE_LIST))))))  
    
    I am exploiting here the fact that calling "realpath" on a directory name strips the path of its trailing slash. Therefore, another call of "dir" on the result gives me the parent directory. I know this is somewhat hacky, but we just won't mind.
    So the next thing is to export our ROOTDIR variable, so that all other makefiles can use it, et voilà: accessing specific directories far away from our "Make" directory won't be a problem any more.
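    To illustrate (with a made-up path): if our makefile lives at /home/me/project/Make/Makefile, the expression unwraps like this:
     $(lastword $(MAKEFILE_LIST))  -> /home/me/project/Make/Makefile
     $(dir ...)                    -> /home/me/project/Make/
     $(realpath ...)               -> /home/me/project/Make   (slash stripped)
     $(dir ...)                    -> /home/me/project/
     $(realpath ...)               -> /home/me/project
     export ROOTDIR                # make it visible to all sub-makes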

  3. How to reuse our makefiles:
    As far as I could see, it is very common to extend a build system bit by bit as the actual project grows. The downside of this procedure is obvious: makefiles written this way tend to be project-specific and can hardly be reused. So we need ways to separate the project-specific attributes from the recurring parts of a makefile.
    The first thing I'd recommend here is a dedicated defs.mk makefile containing all or most of your definitions. This way you have all your project-dependent stuff in just one place, which will also be easy to understand for external collaborators. Note that my make_boilerplate contains a nice draft of such a defs.mk file which you can use and modify as you like.
    In the future, we will just include our definitions file to gain access to our whole project's configuration:
     include $(ROOTDIR)/Make/defs.mk  
    
    Yes, I do know about the "MAKEFILES" environment variable, but I think this way it is just more transparent. So let's just do it this way.
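    For illustration, a minimal sketch of what such a defs.mk might contain (all concrete names and values here are just examples of mine):
     # Make/defs.mk - all project-specific definitions in one place
     PROJECT     = myproject
     SRCDIR      = $(ROOTDIR)/src
     BUILDDIR    = $(ROOTDIR)/build
     OBJDIR      = $(BUILDDIR)/obj
     DISTDIR     = $(BUILDDIR)/dist
     CXXFLAGS   ?= -Wall -O2
     # compiler invocation ending in "-c -o", used as $(COMPILE.cxx) $@ $^
     COMPILE.cxx = $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o
     PRINT       = @echo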

  4. Recursive makefiles(?):
    Coming from Java WITH IDEs, I had a pretty detailed vision of how I wanted my project to be structured. Above all, folders play a central role in dividing the code into single units of related functionality. But how do we tell make that those source files, distributed over several hierarchies of folders, have to be built into one single binary?

    Recursive makefiles - while seeming obvious at first (icu, for example, does it this way) - didn't comply with my ideas stated above, since they would have to lie in the source directory. Additionally, it turned out relatively quickly that recursive use of makefiles is actually not desirable; see the classic paper "Recursive Make Considered Harmful" for details.

    So I went for good old "find" to get my .cpp files, which I then turn into a list of .o files:
     CXXFILES       = $(shell find $(SRCDIR) -type f -name '*.cpp')  
     CXXOBJECTS     = $(patsubst $(SRCDIR)/%.cpp,$(OBJDIR)/module/%.o,$(CXXFILES))  
    
    Those, as you can see, I put into their respective subdirectories in my object build directory.
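    For illustration, with hypothetical values SRCDIR = src and OBJDIR = build/obj, a source file would map like this:
     src/audio/mixer.cpp  ->  build/obj/module/audio/mixer.o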

  5. Where to divide makefiles:
    When working with makefiles in a project, you often have them grow according to your needs while you are concentrating on your code. Therefore one often ends up with huge and unreadable makefiles that are likely to break on minimal changes. I suggest the following: have one makefile for each big part of your project. This does not contradict our earlier statement "recursive make considered harmful", since each makefile is still as self-contained as possible and should not call any sub-makes either. We just use the modularity of our project to make clean cuts. I suggest one makefile for every lib or binary of your own project, maybe one makefile for each of your dependencies if they have to be built from source, or one single makefile for all your prebuilt libraries together.
    Standard candidates for such "outsourced" makefiles include:
    • deps.mk (for dependencies)
    • tests.mk
    • <projectname>.mk for your project

  6. Handling indirect dependencies:
    We know that make rebuilds a file from its prerequisites if those are newer than the file itself. Therefore we have to specify the prerequisites, which we have already automated. But we have completely left out the fact that the header files included by our .cpp files are dependencies, too.
    In order to be able to respond to changes made to those included files as well, we use a well-known trick: we exploit the -M option (of gcc and similar compilers), which outputs all files included by a source file, and write its output into a separate makefile which we include from then on. These secondary makefiles get the suffix ".d", as in "dependency". So the rule for .o files will look like this:
 $(OBJDIR)/%.o: $(SRCDIR)/%.cpp
	@mkdir -p $(dir $@)
	@# COMPILE.cxx comes from defs.mk and is assumed to end in "-c -o"
	@$(COMPILE.cxx) $@ $^
	@# Create the .d file using gcc's -M or -MM option (-MM omits system
	@# headers; add -MT $@ if the emitted target should carry the $(OBJDIR) prefix)
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -MM $(lastword $^) > $(patsubst %.o,%.d,$@)
    
    Additionally, we will have to retrieve the existing .d files at the beginning of our makefile and include them:
     DEPFILES        = $(CXXOBJECTS:.o=.d)  
     -include $(DEPFILES)  
    
    The "-" in front of include turns off errors on file-not-found. And that's it!
    Make sure though that you include the DEPFILES only after the "all" target, so some object file won't become your default target.
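    In other words, the overall ordering in the makefile should look roughly like this ($(BINARY) stands in for whatever your real "all" target depends on):
     all: $(BINARY)       # first rule in the file, so "all" stays the default goal

     # ... pattern rules for .o and .d files as shown above ...

     DEPFILES = $(CXXOBJECTS:.o=.d)
     -include $(DEPFILES) # safe down here: nothing from a .d file can become the default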

  7. Coping with different levels of verbosity:
    In general it has to be said that make's philosophy can be a bit obstructive when it comes to communicating what is going on. First of all, there is no parse-time option for printing messages to the console (though GNU Make 3.81 and later does offer the $(info ...) function for exactly this). Trying it with approaches like "$(shell echo some message)" will fail, because make's goal there is to capture the shell command's output. So you will not be able to simply dump all variable values of interest on make startup before a target is built. Instead, you will have to have a dedicated "info" target doing that, which is either a prerequisite of your "all" target or called by you directly.

    The next problem is that when you have a simple target that just aggregates other targets, and you want to announce both its start and its end, at least the first of the two cannot be achieved. For example, imagine you want to make sure all build directories are set up correctly in a target "directories: dir1 dir2 dir3". Where would you put the "Start creating directories" message? You will have to live with this limitation or invent a smart way around it.

    To keep control over what is printed and when, I suggest setting up variables like "PRINT" (always echoing) and "VPRINT" (echoing when in verbose mode, otherwise just /bin/true). This can be extended to "VVPRINT" (VERBOSE = 2) etc. if necessary.
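    A minimal sketch of such definitions (relying on the VERBOSE variable that is set up in the snippet below):
     PRINT  = @echo
     ifeq ($(VERBOSE),YES)
         VPRINT = @echo
     else
         VPRINT = @true
     endif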
    Additionally, the make variable "MAKECMDGOALS" can be helpful to detect whether the user ran make with the intention of getting more information than usual about the build process. I, for example, put the following right at the beginning of my main Makefile:
     ifeq ($(filter info, $(MAKECMDGOALS)),info)
         VERBOSE = YES
         export VERBOSE # for all sub-makes
     endif
    
Based on these thoughts, I set up a small GitHub repository I called "make_boilerplate". You will find it at https://github.com/suluke/make_boilerplate . Maybe things will also become clearer if you take a look at the makefiles themselves.

I've been reading lots and lots of articles and Stack Overflow questions to get all this done in a way I personally can live with. I don't guarantee, though, that it is bug-free or even free of pitfalls. I see a high chance that some professional with 40 years of experience will show up and tell me about a bunch of flaws my makefile design comes with. So far, though, I personally don't see any problems - and this is why I posted this. I hope you will find it helpful, too.

Regards

suluke



Sonntag, 22. September 2013

The best software to view and control Mac/Windows/Linux desktops remotely from Android

Hello visitor!

In this article I want to show and review several ways and apps to view and control your desktop remotely via your Android phone or tablet.

[Motivation]
There are several reasons why one would want to control a PC remotely over the network. For example:

  • You want to check that updates are installed correctly and then shut the computer down, while you're still away from home.
  • You want to make sure Steam games are downloaded properly, so you can start playing as soon as you're home again.
  • You want to help out a friend or family member with their computer without having to get there physically. Also, you're too lazy to start up your own PC.
  • You simply like the idea of having your desktop's complete power in your hand.
No matter what you want to achieve by remotely controlling your desktop with an Android device, there are plenty of ways to get it done. Unfortunately, though, it is hard to decide which of the apps and connection protocols will suit you best.


[Defining some requirements]
  1. First of all, the price is a very important aspect. Personally, I have come to the conclusion that although free apps have no effect on your wallet if they turn out to be crap, you usually don't want to rely on them as everyday tools, for the following reasons:
    • a great likelihood of support/development being dropped suddenly or missing from the start
    • lacking the features of the paid alternatives
    • a lot of annoying advertising inside the app
    • unfriendly user interfaces
    On the other hand, most people out there will not use the app on a regular basis, so the price should be moderate.
    So let's note: the app we are looking for should offer a free demo which can be upgraded to a fully-featured, ad-free version for less than 10 $ or €.
  2. The solution has to be as easy to set up as possible, so we can expect inexperienced users to set up the desktop's control server after merely being pointed to its website. Consequently, the server also has to be free.
  3. Personally, I am using Linux. At work, I have to maintain iMacs. And most people I know are using Windows. The solution I am looking for has to support all three of these platforms.
[The competitors]
Being asked about remote desktop sharing, most people will probably immediately think of the following two keywords:
  1. TeamViewer (software product)
  2. VNC (protocol)
which both offer Android apps.
Then, after a little bit of research, one will additionally find the following solutions:
  1. RDP (protocol)
  2. Splashtop (software product)
  3. Google Chrome Remote Desktop
  4. Skype screen sharing
of which only the first two support Android with dedicated apps. [Sidenote: in the future, we can expect Chromoting]

[General opinions]
So here are my opinions on the remaining four:

  1. From my personal experience, TeamViewer performs pretty badly on Linux, and the frame rates are far from fluid even once you get it to work. Therefore I will not cover the program here. Nevertheless, I encourage everyone to download and test the software themselves, for it is free.
  2. VNC: probably the best-supported protocol out there for our task, but I will skip it here in favor of RDP, for reasons explained later.
  3. RDP: comes preinstalled with Windows, so there is nothing to set up there. It will also perform better than VNC on Windows servers, for it is able to communicate unalterable components of the user interface (such as the desktop background, the standard widgets used etc.) and thereby generates less traffic on display changes. A server also exists for Linux (xrdp), and one is available to enterprise customers from aquaconnect - which I couldn't test. Actually I thought there was a free server as well, but, as I found out while writing this, I was mistaken.
  4. Splashtop is a single-manufacturer software product bundling a desktop server and apps. It isn't exactly cheap when it isn't on sale and offers no demo, but from personal experience I can only recommend this app, especially when it comes to multimedia: the frame rates are almost excellent and it can transmit audio - even on Linux. There are several versions, of which one requires a one-time payment, while most of the others are given away for free as long as you have signed a service contract for the Splashtop remote service.
[My RDP app recommendation]
If you are on Mac OS, you already have a VNC server integrated (it is what Apple Remote Desktop builds on). I'm afraid I will have to leave you alone in picking an app that suits your needs, since I focused on RDP.
For all the others, it is probably a good bet to go for RDP, as you will get the faster protocol. In the Play Store, there is a ton of apps to choose from. Of the ones I tested, I liked Remote RDP by Yongtao Wong the most - even in the free version (limited to 1 server) - directly followed by aRDP by Unandtech, which is even open source, by the way. Both perform with decent frame rates and mostly efficient and intuitive user interfaces. In addition, both can transmit arrow keys (for example via Hacker's Keyboard) as far as I remember, which is often helpful when tapping is too imprecise.

That said, you should now be ready to head out into the wilderness of the countless RDP apps out there.
If you are willing to pay a little more to invest in a closed system which performs a lot better in return, I kindly remind you of Splashtop Remote.

Cheers,

suluke

Montag, 9. September 2013

[Review] Snakebyte iDroid:Con game controller

Hello stranger!

I recently bought the iDroid:Con game controller made by Snakebyte. A link to the product can be found here.

I had bought the controller mainly with the intention of using it for gaming on my PC - but also to have the option of using it with my mobile devices. After about two weeks of daily use (it is semester break :D) I now want to share my experiences with other prospective buyers. Since I own Android devices exclusively, however, I could not gather any experience with iOS.

1. Packaging, package contents, getting started
The box it comes in is compact, has a viewing window and is made of thin cardboard. Besides the controller and a mini-USB cable, only two slips of paper are included: one advertising further Snakebyte products, the other being the rather small instruction manual (also available online - I have already lost mine). Granted, there is not much to understand, but my pride was a little hurt at having to consult this sheet for the most basic operation of all: pairing with the device to be controlled. The trick is to hold the power button together with the button for the desired controller mode until the device blinks rapidly, from which point on it can be discovered by the gaming device.
The included cable, which serves only for charging, is long enough to keep it permanently attached to a computer. This makes it fairly easy to set up a permanent power supply for the controller.

2. Build quality
For the relatively low price you get a decently built controller - it could feel more premium, but it does not come across as cheap either, which is probably due to the matte plastic that was used. The main annoyances are the four main buttons (A, B, X, Y) and the D-pad, as their material somehow fits less well into the overall picture. In addition, the buttons can be wiggled horizontally in their holes, and they make a clearly audible click when coming back up (which doesn't bother me much - button noises are part of gaming, after all).

3. Usage
[Positive]
First, the positives: I tested the controller under (Arch) Linux, Windows 8 and Android, though I did not try out all the connection modes.
Under Linux, for example, I played Psychonauts, where the 4-axis/12-button mode worked out better than the 5-axis mode, simply because the game couldn't do anything with the analog shoulder buttons. The connection was easy to set up (via blueman), although after using the controller on another device/operating system, a complete re-pairing (removing it from the list of known devices and adding it again) was necessary. The manual points this out as well.

Under Android I tested the mouse mode, since all in all I am not the biggest fan of first-person shooters on mobile devices. It worked, even though swipe gestures are very inelegant - but that is the operating system's fault.

Under Windows 8 I played Burnout Paradise, where I could use the shoulder buttons as analog inputs (as in racing games on the Xbox). Strangely enough, starting races via left + right shoulder button did not work here, so I always had to move over to the keyboard to press a+y.

I could not make out any problems with lag over the wireless connection - although jump'n'run and racing games are probably not affected by it that much anyway.
I also have no problems worth mentioning with dirt on the controller. But then, I have not been using it for long, and it is good hygiene to clean a controller regularly anyway.

[Negative]
In general, though, usage under Windows 8 is the most annoying. The operating system does not manage to re-establish a connection to the controller (I am fairly sure this is not only caused by switching between devices), so you have to remove the controller from the device list again and again. Then Windows takes a good 30 seconds to "make friends" with the device, i.e. to sort out drivers or whatever. Windows and its progress bars...
It is also a bit unfortunate that the controller puts itself into standby after a relatively short time. OK, I guess it waits 15 minutes each time, but then under Windows 8 I have to wrestle with the behavior described above all over again. And under Linux I have to quit the game for a moment, because you cannot tab out of Psychonauts. Probably all of this is down to Bluetooth, or it could be fixed if my devices were permanently discoverable - then the controller could attempt a connection on its own. Or it is my own fault for having overlooked an "automatically connect to known devices" option (under Linux) somewhere.
Finally, I have to criticize the handling of the battery level: as far as I can tell, it is simply missing. That way, the controller once died on me in the middle of a game without any warning.

[Fazit & Bewertung]
Als Fazit würde ich also darauf hinweisen, dass bei diesem Wireless Controller viel Komfort einer plug'n'play Lösung mit Kabel verlorengeht. Außerdem muss man auf die Stromversorgung achten (und den auto-Standby). Der Vorteil des ganzen ist die plattformübergreifende Verwendbarkeit und die Unabhängigkeit von Kabeln.
Weil der Controller für mich seinen Zweck erfüllt und seine Problemchen größtenteils auf das Hauptmerkmal "Bluetoothverbindung" zurückzuführen sind, gibt's von mir 4 Sterne von 5, mit einem Stern Abzug für fehlende Akkuanzeige. Der Preis macht dabei das äußerliche wett.

Ich hoffe, diese Einschätzung hilft da draußen jemandem. Über Feedback würde ich mich wie immer freuen.

Viel Spaß beim Zocken!

Sonntag, 8. September 2013

[Steam][SDL][OpenAL] Stuttering sound in many Linux games

Hello folks.
On my Linux system (up-to-date Arch Linux) I was having trouble with several OpenAL-based games, namely Psychonauts, Super Hexagon and FTL/Faster Than Light. All games were purchased via Steam. The problem was that the games' sound came out somehow garbled - stuttering, to be more precise. All I could do was restart the game once or twice and hope that it would finally behave as intended. As there are several components involved in the stuttering sound (SDL, OpenAL, Steam, PulseAudio, Linux), it was not easy for me to put the problem into words Google could understand.

After doing a lot of research that didn't solve my problem, I finally stumbled upon a small German forum post that seemed somehow connected to this issue. It also mentioned high CPU load when these problems occur, which I could confirm, especially while playing FTL. The post suggested that it might help to open the mixer/pavucontrol, take the particular audio stream of the game (for Steam games it is usually a stream connected to Steam), and turn it down a little - from 100% to, let's say, 90%. And voilà: after following this advice I seem to have no troubles left with those games.
If you don't have a program called "pavucontrol" (PulseAudio Volume Control) yet, it is almost certain that you will find it in your distribution's repository.
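If you prefer the command line over pavucontrol, something along these lines should achieve the same (the sink input index 42 is a placeholder - look up the real one with the first command):
 # find the index of the game's audio stream
 pactl list short sink-inputs
 # lower that stream's volume from 100% to 90%
 pactl set-sink-input-volume 42 90%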
And that's it!

But wait - what happened? As the mentioned post suggests, it is a problem of the sound clipping (overmodulation). I guess there is an (artificial) limit set on the outgoing physical sound volume, which causes samples that are too loud to be filtered. Since intercepting the audio stream, filtering it and playing it on is presumably an expensive operation, this would also explain the high CPU usage noticed before. I don't know why, but OpenAL (at least the version shipped with Steam) seems to generate a lot of those too-loud samples.
Now, by setting the limit of the virtual stream lower than before, the problem goes away.

Happy playing!

Donnerstag, 29. August 2013

Linux dual monitor: Don't stretch fullscreen over all monitors in SDL games (psychonauts, ftl, superhexagon)

[EDIT] Some time after I wrote this, I also found this article:
http://www.maketecheasier.com/run-fullscreen-games-in-linux-with-dual-monitors/
which is written a lot more nicely.

Hello everybody!

I'm using two monitors on my Linux machine: the 13" 1366x768px one built into my laptop and my 24" full HD monitor. Needless to say, these two do not really match well enough to display one application across both of them. Unfortunately, many games based on the well-known SDL library try to do exactly that: when in fullscreen, they only offer me resolutions like "3286x1080" (the combined width of both screens) right after the notebook's native resolution, with nothing in between. When opening in windowed mode, they calculate the middle of the screen by taking both screens into consideration, thus opening right between them.
All I want, though, is fullscreen games opening in full HD resolution on my external monitor, and windowed games in the middle of just this monitor.

Lately, I was quite happy to discover that wine/crossover will open all windows on the external monitor if I tell xrandr that it is my "--primary" screen, using "xrandr --output HDMI-0 --primary". Of course this doesn't work for native, SDL-based games, but it kind of fits in here, so I thought I'd mention it.

So, today I got my Bluetooth controller and I really wanted to play Psychonauts with it. It, too, exhibited the said issue, and this time I just didn't want to give up. After some heavy googling, I finally came upon the SDL_VIDEO_FULLSCREEN_HEAD variable mentioned in the Arch wiki (archwiki ftw!), and - thank god - it worked as expected. So for you it is:
$: SDL_VIDEO_FULLSCREEN_HEAD=0 ~/.steam/root/SteamApps/common/Psychonauts/Psychonauts
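If you launch the game through the Steam client instead, the same variable can go into the game's launch options (right-click the game, Properties, Set Launch Options); "%command%" is Steam's placeholder for the game's original command line:
 SDL_VIDEO_FULLSCREEN_HEAD=0 %command%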

I hope this helps someone out there, and I wish you happy gaming.
Sincerely

suluke

Related: 
A topic on a mailing list also discussing a similar solution can be found here: http://icculus.org/pipermail/psychonauts/2012-June/000000.html