Google Play Services

Your Android is old but the apps are new, and you love it? That's Google's new game: Google Play Services.

Android 4.3 was released to Nexus devices a little over a month ago, but, as is usual with Android updates, it's taking much longer to roll out to the general public. Right now, a little over six percent of Android users have the latest version. And if you pay attention to the various Android forums out there, you may have noticed something: no one cares.

4.3’s headline features are a new camera UI, restricted user profiles, and support for new versions of Bluetooth and OpenGL ES. Other than the camera, these are all extremely dull, low-level enhancements. It’s not that Google is out of ideas, or the Android team is slowing down. Google has purposefully made every effort to make Android OS updates as boring as possible.

Why make boring updates? Because getting Samsung and the other OEMs to actually update their devices to the latest version of Android is extremely difficult. By the time the OEMs get the new version, port their skins over, ship a build to carriers, and the carriers finally push out the OTA update, many months pass. If the device isn’t popular enough, this process doesn’t happen at all. Updating a phone is a massive project involving several companies, none of which seem to be very committed to the process or in much of a hurry to get it done.

Since it's really hard to push out an Android update, Google's solution is to sidestep the process completely. The company stopped putting all the good stuff in Android updates. It's not that good stuff isn't coming out at all; the exciting features just aren't being included as part of a big Android release.

This year's Google I/O was a show of force for this new delivery concept. No new Android version was announced at the show, yet Google unveiled Google Hangouts, Google Play Games, cloud saving of game and app data, a complete redesign of Google Play Music and Google Maps, a new version of the Google Maps API, and new location and activity recognition APIs. Post-I/O, we've seen seemingly OS-level features like the Android Device Manager, a remote wipe and device-tracking system, added without touching the base OS.

It's such a simple idea: Android updates roll out too slowly, so start releasing all the cool stuff separately. The hard part is making it actually work, and the first piece of that puzzle is a little app that has finally come of age: Google Play Services.

Image: Google Play Services can do whatever it wants.

Calling Play Services an “app” doesn’t really tell the whole story. For starters, it has an insane amount of permissions. It’s basically a system-level process, and if the above list isn’t enough for whatever it needs to do next, it can actually give itself more permissions without the user’s consent. Play Services constantly runs in the background of every Android phone, and nearly every Google app relies on it to function. It’s updatable, but it doesn’t update through the Play Store like every other app. It has its own silent, automatic update mechanism that the user has no control over. In fact, most of the time the user never even knows an update has happened. The reason for the complete and absolute power this app has is simple: Google Play Services is Google’s new platform.

Image: What happens when you try living without Google Play Services.

Andrew Cunningham looked at this shortly after Google I/O, but now things are truly crystallizing. Google’s strategy is clear. Play Services has system-level powers, but it’s updatable. It’s part of the Google apps package, so it’s not open source. OEMs are not allowed to modify it, making it completely under Google’s control. Play Services basically acts as a shim between the normal apps and the installed Android OS. Right now Play Services handles the Google Maps API, Google Account syncing, remote wipe, push messages, the Play Games back end, and many other duties. If you ever question the power of Google Play Services, try disabling it. Nearly every Google App on your device will break.

Chart: Play Services' support across the Android install base.

The reason for all the permissions and sneaky updates is best illustrated by the chart above. While the latest version of Android is on six percent of devices, Play Services rolls out to everyone in a week or two and works all the way back to Android 2.2. That means any phone three years old or newer has the latest version of Google Play Services. According to Google's current Android statistics, that's 98.7 percent of active devices. So when Google announced its slew of new APIs at Google I/O, nearly every Android device was compatible within a week or two. Play Services is a direct line from Google to the core of your phone, and, really, no one outside of Google is quite sure just how powerful it can get.

Google Play Services takes care of lower-level APIs and background services, and the other part of Google’s fragmentation takedown plan involves the Play Store. Google has been on a multi-year mission to decouple just about every non-system app from the OS for easy updating on the Play Store. Take a quick look at Google’s Play Store account and you’ll see a huge list of apps, many of which ship by default in Android. Gmail, Maps, Search, Chrome, Calendar, the keyboard, YouTube, and even the Play Store itself are all separately updatable.

The above list is a good representation of the current update situation in Android. Nearly everything that can be moved out of the main OS has been. The only features left that would require an OS update are things like hardware support, application framework APIs, and apps that require a certain level of security or access (like the lock screen, Phone, and Settings apps).

This is how you beat software fragmentation. When you can update just about anything without having to push out a new Android version, you have fewer and fewer reasons to bother calling up Samsung and begging them to work on a new update. When the new version of Android brings nothing other than low-level future-proofing, users stop caring about the update.

This gets even more interesting when you consider the implications for future versions of Android. What will the next version of Android have? Well, what is left for it to have? Android is now on more of a steady, continual improvement track than an all-at-once opening of the floodgates like we last saw with Android 4.1. It seems like Google has been slowly moving down this path for some time; the last three releases have all kept the name “Jelly Bean.” Huge, monolithic Android OS updates are probably over—”extinct” may be a more appropriate term.

Not having to package everything into a major OS update means Google can get features out to more users much faster and more frequently than before. Android feature releases can now work just like Google’s Web app updates: silent, continual improvement that happens in the background. Your device is constantly getting better without your having to do anything or wait for a third party, and developers can take advantage of new APIs without having to wait for the install base to catch up. This should all lead to a more unified, less fragmented, healthier Android ecosystem.

Source – Arstechnica

PHP Dynamic Pages

How to create dynamic pages with PHP

This tutorial will show you how to create dynamic pages with PHP. PHP is one of the most popular languages for building dynamic websites, and today we are going to see how it's done.

Step 1: Create files

First we need a few separate pages, so let's go ahead and create them. The first page we need is index.php, which will be our main page, so create a file and name it index.php. After that we need to add some HTML to the page.

<html>
<head>
<title>My PHP Site</title>
</head>
<body>
<h1>Welcome to my PHP Site.</h1>
<ul>
 <li>Products</li>
 <li>Blog</li>
 <li>About Us</li>
</ul>
</body>
</html>

This is going to be our main page, which is why we are putting our navigation here. It doesn't look fancy because our main focus is PHP right now.

Add Links:

Now we need to add links to our navigation. Since we have three items in our navigation, we will need three more pages for our website, but first let's add some links to our navigation items.

<ul>
 <li><a href="#">Products</a></li>
 <li><a href="#">Blog</a></li>
 <li><a href="#">About Us</a></li>
</ul>

For now we will not specify any path in our links; we will get back to that later. If you have done everything right, you will get a page like this:

Image: PHP Dynamic Page

Other pages:

Now that we are done with our main page, we want to create a few other pages. So just create three pages and name them products.php, blog.php, and about.php. After that we want to put some content in these pages; we don't have to repeat all those HTML tags again. We just have to put the content that we want to show on the website, so for the products page put the following lines in the file.

<?php
echo "<h2>Products</h2>";
echo "This is our product page.<br/>";
echo "You should be able to see all of our products here.";
?>

Do the same for the other two pages. You can put whatever you like in them; just keep one thing in mind: you don't need to add the <html> tags again.

Step 2: Making Pages Dynamic

Now that we are done with our main page and have created the other pages, it's time to make everything dynamic. We will pass the name of the requested page in the URL, use the $_GET variable to catch that value, and assign it to another variable. Then we can include that variable's page, and all the content from the other pages will show up on our main page.

Add this PHP code at the top of your main page:

<?php
if (isset($_GET['page'])) {
    $page = $_GET['page'];
} else {
    $page = NULL;
}
?>

So basically what we have done here is check whether the page value in $_GET is set or not. If it is set, we assign it to the $page variable; if it is not set, we set $page to NULL.
Next, we need to add some code to our main page where we want to show the content of the other pages. Since we want that content to appear after our navigation, put these lines after the closing </ul> tag.

<?php
if (empty($page)) {
    echo "This is our main page.";
} else {
    include($page);
}
?>

Here we check whether the $page variable is empty. If it is empty, we show the main page's own content; if it is not, we include the file named in $page. Basically, $page contains the page we want to show inside our main page. Don't worry, it will all make sense in a minute.
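One caution: including a file name taken straight from the URL lets a visitor request arbitrary files on the server (a local file inclusion risk). A minimal hardening sketch, assuming the three page files used in this tutorial, checks the value against a whitelist before including it:

<?php
// Hardening sketch: only include pages we explicitly allow.
$allowed = array('products.php', 'blog.php', 'about.php');

if (empty($page)) {
    echo "This is our main page.";
} elseif (in_array($page, $allowed, true)) {
    include($page);
} else {
    echo "Page not found.";
}
?>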

Changing Links:

With our main page all set, the last thing we need to do is change the links on it. So just make these changes to the links on the main page.

<ul>
 <li><a href="index.php?page=products.php">Products</a></li>
 <li><a href="index.php?page=blog.php">Blog</a></li>
 <li><a href="index.php?page=about.php">About</a></li>
</ul>

As you can see, all of these links point to index.php, but what matters here is the part after index.php, that is, the query string after the ?. After the ? we use a parameter (you could call it a reference name) called 'page'. You can use any name here, but then you will have to change the key you read from $_GET as well. We set page equal to the page we want to show.
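As a small aside, this is how PHP exposes that query string. A throwaway debugging snippet (not part of the site's files, just for inspection) makes it visible:

<?php
// For a request like index.php?page=products.php, PHP parses the query string
// into the $_GET superglobal, so this prints something like:
//   array(1) { ["page"]=> string(12) "products.php" }
// Rename the parameter in the link (e.g. ?p=products.php) and you must read
// $_GET['p'] instead.
var_dump($_GET);
?>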

After it’s all set you can try it and hopefully it will work.


Step 3: Improving Links

You can take it one step further by modifying the code a little bit to make your links more attractive than index.php?page=about.php. What you need to do is add a line in the index page right after the point where you assign the $_GET value to $page.

<?php
if (isset($_GET['page'])) {
    $page = $_GET['page'];
    $page .= '.php';  // Add this line here and you are all set.
} else {
    $page = NULL;
}
?>

Now the last thing you want to do is remove the .php extension from all the links, and after that our links become:

<ul>
 <li><a href="index.php?page=products">Products</a></li>
 <li><a href="index.php?page=blog">Blog</a></li>
 <li><a href="index.php?page=about">About</a></li>
</ul>

Now when you browse to any of these links, the URL will not show .php, so your links look cleaner, like index.php?page=about. We are done, and you can play around a little more with your pages if you want; a complete index.php combining everything is sketched below. Go ahead and try this, and leave a comment if you get stuck anywhere or want to share something new.
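For reference, here is a minimal sketch of the finished index.php with all of the pieces above combined (the file names and content are just the ones used in this tutorial; the whitelist check from the earlier sketch could be added before the include as well):

<?php
// Read the requested page name from the URL and append the .php extension.
if (isset($_GET['page'])) {
    $page = $_GET['page'];
    $page .= '.php';
} else {
    $page = NULL;
}
?>
<html>
<head>
<title>My PHP Site</title>
</head>
<body>
<h1>Welcome to my PHP Site.</h1>
<ul>
 <li><a href="index.php?page=products">Products</a></li>
 <li><a href="index.php?page=blog">Blog</a></li>
 <li><a href="index.php?page=about">About</a></li>
</ul>
<?php
// Show the main page's own content, or include the requested page.
if (empty($page)) {
    echo "This is our main page.";
} else {
    include($page);
}
?>
</body>
</html>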

CSS Pre-processors - LESS, SASS, Stylus

The problem with CSS pre-processors


I've been considering using a CSS pre-processor like SASS, LESS, Stylus, etc. for a very long time. Every time someone asked me if I was using any of these tools/languages, I would say that I'm used to my current workflow and I don't really see a reason for changing it, since the problems those languages solve are not really the problems I'm having with CSS. Then yesterday I read two blog posts which made me reconsider my point of view, so I decided to spend some time today studying the alternatives (once again) and porting some code to check the output, to see whether the languages would really help keep my code more organized/maintainable and/or make the development process easier (and whether they have evolved over the past few years).

It takes a couple of hours for an experienced developer to learn most of the features present in these languages (after you learn the first couple of languages, the next ones are way easier), but if you have no programming skills besides CSS/HTML and/or don't know basic programming logic (loops, functions, scope), it will probably take a while; the command line is another barrier for CSS/HTML devs. But that isn't the focus of this post; I'm going to talk specifically about overused/misused features. I will try to explain the most common problems I see every time someone shows a code sample or I see a project written in any of these languages/pre-processors.

Mixins

What are mixins?

Mixin is a common name used to describe the idea that an object should copy all the properties of another object. To sum up, a mixin is nothing more than an advanced copy and paste. "All" the famous pre-processors have some kind of mixin.

Dumb code duplication is dumb

Using the SCSS syntax (Sass), a mixin can be declared and used like this:

@mixin error {
    color: #f00;
    border: 2px solid #fc0;
}

.error-default {
    @include error;
}

.error-special {
    @include error;
    background-color: #fcc;
}

Which will compile to:

.error-default {
    color: #f00;
    border: 2px solid #fc0;
}

.error-special {
    color: #f00;
    border: 2px solid #fc0;
    background-color: #fcc;
}

Note that the properties are duplicated, which is very bad: file size will increase a lot, and overall performance will also be degraded if this isn't used with care. Imagine a large project with thousands of lines of code; the amount of duplicated code would be unacceptable (by my standards).

This problem isn't specific to SASS; it is also present in LESS, Stylus, and any other language/pre-processor that supports the same feature. Because of the new layer of abstraction, the developer won't realize they are creating code with lots of duplication. ALWAYS gzip CSS and JS files! gzip is really good at compressing duplicate data, so this problem might be irrelevant/nonexistent in production code; just beware that the generated CSS will get harder to maintain if you or future devs for some reason decide to stop using a pre-processor and simply update the generated CSS (maybe they don't have access to the source files or have no experience with a pre-processor).

Extend

LESS and Stylus don't have support for anything similar to an extend; that's why I picked SCSS (Sass) to write the code samples. An extend is like a "smarter mixin": instead of copying and pasting the properties, it sets the properties on multiple selectors at once.

.error {
    color: #f00;
    border: 2px solid #fc0;
}

.error-default {
    @extend .error;
}

.error-special {
    @extend .error;
    background-color: #fcc;
}

Which will compile to:

.error, .error-default, .error-special {
    color: #f00;
    border: 2px solid #fc0;
}

.error-special {
    background-color: #fcc;
}

Way closer to what a normal person would do manually… "Only" use mixins if you need to pass custom parameters (see the sketch just below). If you see yourself using the same mixin multiple times passing the same values, then you should create a base "type" that is inherited by other selectors. Compass (a nice SASS framework) has a lot of mixins which I think should be base classes instead.
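As a rough sketch of when a mixin does earn its keep (the names and values here are made up for illustration), each caller passes different parameters, so there is no single base class that could replace it:

// A mixin is justified when each caller passes different values.
@mixin notice($text-color, $border-color) {
    color: $text-color;
    border: 2px solid $border-color;
}

.notice-error {
    @include notice(#f00, #fc0);
}

.notice-info {
    @include notice(#06c, #9cf);
}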

Extend isn’t enough

Note that extend avoids code duplication, but it also causes other problems. The number of selectors can become an issue: if you @extend the same base class many times, you may end up with a rule that has thousands of selectors, which won't be good for performance either and can even make the browser crash.

Another issue is that every class you create to be used only by @extend is going to be included in the compiled file (even if not used), which can be a problem in some cases (name collisions, file size) and makes this process not viable for creating a framework like Compass.

I really wish that SASS would improve the way @extend works (and that the other pre-processors would also implement a similar feature) so we could create base classes for code reuse without necessarily exporting them. Something like:

@abstract error {
    color: #f00;
    border: 2px solid #fc0;
}

.error-default {
    @extend error;
}

.error-special {
    @extend error;
    background-color: #fcc;
}

Which would compile to:

.error-default, .error-special {
    color: #f00;
    border: 2px solid #fc0;
}

.error-special {
    background-color: #fcc;
}

PS: I know this kind of feature was already proposed before.
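For what it's worth, Sass's placeholder selectors (the %name syntax added in Sass 3.2) come close to this idea; they can be extended but are never emitted on their own:

// %error is a placeholder: it can be extended but never appears in the output.
%error {
    color: #f00;
    border: 2px solid #fc0;
}

.error-default {
    @extend %error;
}

.error-special {
    @extend %error;
    background-color: #fcc;
}

This compiles to the grouped .error-default, .error-special rule without any .error class in the output.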

Another problem is that if you mix nested selectors with @extend, it might also cause undesired side effects.

Extend and mixins can be bad for maintenance

Contrary to common knowledge, extending other classes and creating mixins can degrade maintainability. Since the place where you use the properties is far away from where the properties are defined, there is a bigger chance that you will change properties without noticing you are affecting multiple objects at once, or without realizing which elements are affected by the changes. This is called "tight coupling":

Tightly coupled systems tend to exhibit the following developmental characteristics, which are often seen as disadvantages:

  • A change in one module usually forces a ripple effect of changes in other modules.
  • Assembly of modules might require more effort and/or time due to the increased inter-module dependency.
  • A particular module might be harder to reuse and/or test because dependent modules must be included.

(source: Wikipedia)

I prefer to group all my selectors by proximity; that way I make sure that when someone updates a selector/property, they know exactly what is going to be affected by the change, even if that implies some code duplication.

Avoid editing base classes as much as possible; follow the "open/closed principle" as much as you can. (Augment base classes, do not edit them.)

Nesting

Another feature that a lot of people consider useful is selector nesting: instead of repeating the selectors many times, you simply nest the rules that should be applied to child elements.

#content {

    table.hl {
        margin: 2em 0;

        td.ln {
            text-align: right;
        }

    }

}

Compiles to:

#content table.hl {
    margin: 2em 0;
}

#content table.hl td.ln {
    text-align: right;
}

By abstracting the selectors, it becomes very easy to be over-specific, and specificity is hard to handle and bad for maintainability (see the sketch below). I've been following the OOCSS approach and I don't need child selectors that much, so I don't think that typing the same selector multiple times is a real problem (especially with good code completion). I know a lot of people don't agree with that approach, but for the kind of stuff I'm doing it's been working pretty well.
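As a rough sketch of what I mean (the selectors here are made up for illustration), a few levels of nesting quickly produce very specific compiled selectors:

// Each nesting level becomes part of the compiled selector.
#sidebar {
    .widget {
        ul {
            li {
                a {
                    color: #06c;
                }
            }
        }
    }
}

// Compiles to:
//   #sidebar .widget ul li a { color: #06c; }
// Overriding that later requires an equally specific (or more specific) selector.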

Call me a weirdo, but I also find nested code harder to read, since I've been coding non-nested CSS for more than 7 years.

Sum up

These tools have some cool features like the helper functions for color manipulation, variables, math helpers, logical operators, etc., but I honestly don't think they would improve my workflow that much.

My feeling about these pre-processors is the same feeling I have about CoffeeScript: nice syntax and features, but too much overhead for no "real" gain. Syntax isn't the real problem in JavaScript for me, the same way it isn't the real problem in CSS (and most languages). You still need to understand how the box model works, specificity, cascading, selectors, floats, browser quirks, etc. You are just adding another layer of abstraction between you and the interpreted stylesheet, adding yet another barrier for future developers and increasing the chance of over-engineering. Markup may become simpler (with fewer classes/ids), but that comes with many drawbacks.

For me the greatest problem is developers who code CSS without the knowledge required to build a maintainable and scalable structure. A stylesheet full of mixins, if/else, loops, variables, functions, etc. will be as hard to maintain as a bloated hand-crafted stylesheet, if not harder. Developers have an inherent desire to be "clever", and that is usually a red flag.

“Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” – Brian Kernighan

Mixins are popular nowadays because of browser vendor prefixes (the sketch after this paragraph shows the kind of prefix mixin this leads to). The real problem isn't that CSS doesn't support mixins or variables natively, but that we have to write an absurd number of vendor prefixes for no real reason, since most of the implementations are similar and most of the features are only "cosmetic". The real issue isn't the language syntax, but the way browsers add new features and the way people use them before they are implemented broadly (without prefixes). This could be handled by a tool that only adds the vendor prefixes (without the need for mixins or a special language), like cssprefixer. Try to find the real problem you are trying to solve and think about different solutions.
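As a rough sketch of the boilerplate I mean (the mixin name and values are just for illustration), this is the sort of prefix mixin people end up writing, which a prefix-only tool would make unnecessary:

// Typical vendor-prefix boilerplate written as a mixin.
@mixin border-radius($radius) {
    -webkit-border-radius: $radius;
       -moz-border-radius: $radius;
            border-radius: $radius;
}

.box {
    @include border-radius(4px);
}

// Compiles to a .box rule carrying all three declarations;
// a prefixing post-processor could generate the same output from plain CSS.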

“It’s time to abolish all vendor prefixes. They’ve become solutions for which there is no problem, and they are actively harming web standards.” – Peter-Paul Koch

I've been following the OOCSS approach on most of my latest projects and probably will keep doing it until I find a better approach. For the kind of stuff I'm coding, it is more important to be able to code things fast and make updates during the development phase than to maintain/evolve the project over many months/years. I find it very unlikely to make drastic design changes without updating the markup; of the last 100 projects I coded, it probably only happened 2 or 3 times (CSS Zen Garden is a cool concept but not really that practical). Features like desaturate(@red, 10%) are cool, but usually designers already provide me a color palette to be used across the whole site, and I don't duplicate the same value that much. If I do duplicate it everywhere, I can simply do a "find and replace" across all the CSS files and call it a day. By using a function that generates a color (whose resulting value you have no idea about), you can't simply do a find and replace, since you don't know what value you are looking for in the source code; I prefer to simply use a color picker…

I know my experience is very different from most people's, so that's why my approach is also different; your mileage may vary… If I ever need to use any of these tools it won't be an issue (I have no strong barrier against them); I just don't think they would save me enough time right now to outweigh the drawbacks. Pick your tools based on the project and your workflow. Just because I listed a couple of issues doesn't mean you should rule out using a pre-processor; in many cases it would be an awesome way of generating stylesheets. Just think about the drawbacks and be responsible.

“With great power comes great responsibility.” – Uncle Ben to Peter Parker

PS: I love CSS; for me it's one of the most rewarding parts of website development. It's like solving a hard puzzle…

Source – millermedeiros.com