Sunday, June 28, 2015

Python script to reboot server N times

I'm trying to stress test several servers that I can ssh into, and I want to write a Python script that runs a reboot loop N times. I call

os.system('reboot') 

But I'm not sure how to have the script resume execution once the server has finished booting. The servers run various Linux distros. Any help would be great.
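A sketch of one possible approach: drive the loop from a control machine over ssh, rebooting the target and polling until it accepts connections again before the next iteration. The host name, the sudo invocation, and the timeouts below are placeholders, not tested values:

```python
import subprocess
import time

def wait_for_ssh(host, timeout=600, poll=10):
    # Poll until a trivial ssh command succeeds or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        rc = subprocess.call(["ssh", "-o", "ConnectTimeout=5", host, "true"])
        if rc == 0:
            return True
        time.sleep(poll)
    return False

def reboot_loop(host, n):
    for i in range(n):
        subprocess.call(["ssh", host, "sudo reboot"])
        time.sleep(30)  # give the server time to actually go down
        if not wait_for_ssh(host):
            raise RuntimeError("%s did not come back up (iteration %d)" % (host, i))
```

Running the loop on a separate machine sidesteps the original problem: the script's own process never has to survive the reboot.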

The use of the need_resched flag and the schedule() routine within the Linux kernel [2.4]

As I understand it, when the kernel decides that the currently running process should be stripped of the CPU, it sets the need_resched flag. The flag is checked before returning to user space, and if it is set the kernel calls schedule(). However, I have noticed that the sys_sched_yield() routine does not use the need_resched flag but calls schedule() explicitly. Why?

EasyPHP works, Linux server returns "Call to a member function on a non-object"

I am using EasyPHP for development with default settings. All my code works fine when connecting to the local EasyPHP database. But when I upload it to my web hosting, which uses a Linux server, all functions involving objects suddenly return OOP errors. This is an example of the kind of function I normally write:

function user_data($user_id){
    global $db;
    $data = array();
    $user_id = (int)$user_id;

    $func_num_args = func_num_args();
    $func_get_args = func_get_args();

    if($func_num_args > 1) {
        unset($func_get_args[0]);
        $fields = '`' . implode('`, `', $func_get_args) . '`';
        $query = "SELECT $fields FROM users WHERE user_id = '$user_id'";
        $result = $db->query($query);
        $data = $result->fetch_assoc(); 
        return $data;
        $result->free();
        $data->free();
    }
}

Everything works fine on my local PHP server. After I upload it to the web hosting with a Linux server, I get "Call to a member function fetch_assoc() on a non-object" on lines like:

 $data = $result->fetch_assoc();

Can anybody tell me what I am writing wrong?

DB connection for EasyPHP:

 $db = new mysqli("$DB_HOST","$DB_USER","$DB_PASSWORD","$DB_NAME");

DB connection for the Linux server:

 $socket = "/tmp/mysql51.sock";
 $db = new mysqli("$DB_HOST","$DB_USER","$DB_PASSWORD","$DB_NAME", 0, $socket);

Thank you

EDIT - SOLUTION

I found the reason for the error. It lies in how variables are written into the SQL query inside the PHP code. The query should be:

$query = "SELECT $fields FROM users WHERE user_id = '".$user_id."'";

The reason for the error is still a mystery to me; it could be the platform or the PHP version.

UNIX: Unexpected End of file

I have a problem where the editor reports no problems, but when I run it in a terminal, it reports: Unexpected end of file.

What I'm trying to do is get it so that when this script is run, it changes the permissions so that everyone can execute the file.

I'm writing the script on a Windows machine:

#!/bin/bash

file=~\scripts\chmxtextfile.txt

if [[  -e "$file"  ]];   
then {chmod ugo+x $file} 
elif [[  ! -e "$file"  ]];
then echo "File doesn't exist"
fi

Beaglebone Black; Wrong SPI Frequency

I'm new to programming the BeagleBone Black and to Linux in general, so I'm trying to figure out what's happening when I set up an SPI connection. I'm running Linux beaglebone 3.8.13-bone47.

I have set up an SPI connection using a Device Tree Overlay, and I'm now running spidev_test.c to test it. For the application I'm making, I need a fairly specific frequency, but when I run spidev_test and measure the frequency of the bits shifted out, I don't get the expected frequency.


I'm sending an SPI packet containing 0xAA, and in spidev_test I've set "spi_ioc_transfer.speed_hz" to 4000000 (4 MHz). But I'm measuring a data transfer frequency of 2.98 MHz. I see the same result with other speeds as well; deviations are usually around 25-33%.

How come the measured speed doesn't match the assigned speed? How is the speed assigned in "speed_hz" defined? How precise should I expect the frequency to be?

Thank you :)
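For what it's worth, one common cause of deviations in this range is that SPI masters derive SCLK by dividing a fixed reference clock, often only by powers of two, so the driver grants the largest achievable speed that does not exceed the request. A sketch of that rounding, assuming a hypothetical 48 MHz reference (an assumption for illustration, not a verified BeagleBone Black value):

```python
def granted_speed(requested_hz, ref_hz=48_000_000):
    # Find the largest ref_hz / 2**k that does not exceed the request.
    div = 1
    while ref_hz // div > requested_hz:
        div *= 2
    return ref_hz // div

print(granted_speed(4_000_000))  # 3000000 -> a 25% deviation, as measured
```

Under that assumption, requesting 4 MHz yields 48 MHz / 16 = 3 MHz, which matches the measured 2.98 MHz within probe tolerance.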

How to debug a simple program using gdb inside a complex command line?

Let's say I have a program called _program. I run it inside a larger stdin/stdout pipeline like this:

echo Hello world | _program >> output.txt

How can I debug _program using gdb?

Why does this command kill my shell?

Update: This is a more general command that is more reproducible. ShellFish identified that there is a more general pattern:

non-existingcommand & existingcommand &

for example,

xyz & echo &

Also, I had a coworker try over an ssh connection and his connection was closed after running the command. So this doesn't appear to be limited to a certain terminal emulator.

Original question:

echo?a=1&b=2|3&c=4=

Behavior:

After executing the command, my current Gnome Terminal tab closes without warning.

Background:

We were testing a URL with a curl command but forgot to quote it or escape the special characters (hence the ampersands and equals signs). Expecting some nonsense about syntax issues or commands not found, we instead watched our shell simply quit. We spent some time narrowing the command down to the minimum that would cause the behavior.

We are using Gnome Terminal on Ubuntu 14.10. Strangely, the behavior is not present on another box I have running byobu even if I detach from the session. It also doesn't happen on Cygwin. Unfortunately I'm limited to testing with Ubuntu 14.10 otherwise.

Note: The following command also kills my terminal but only about half of the time:

echo?a=1&b=2&c=3=

Additional tests:

Someone recommended using a subshell...

guest-cvow8T@chortles:~$ bash -c 'echo?a=1&b=2|4&c=3='
bash: echo?a=1: command not found
guest-cvow8T@chortles:~$ bash: 4: command not found

No exit.

Redirect domain.com to www.domain.com on a multi-site server

Thank you for viewing this. Please note that my server is running LEMP on Debian Jessie.

I am trying to force the "www." prefix. I was able to do this with the following solution found on Stack Overflow.

return       301 http://ift.tt/1mDYAZp;

However, I am running several websites from the server and the issue I am running into is as follows.

When logging into WordPress at http://ift.tt/1KiJWke on one website, it routes me to http://ift.tt/1LOLvFH. I believe this has to do with a default_server or hostname issue. Do I need to add all hostnames to /etc/hostname?

Here is my host file

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # SSL configuration
        #
        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;

        root /var/www/site0;

        index index.php index.html index.htm;

        server_name site0.com;
        #return 301 http://ift.tt/1KiJWki;
        location / {
                try_files $uri $uri/ =404;
                #try_files $uri $uri/ /index.html;
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        location ~ \.php$ {
        #       include snippets/fastcgi-php.conf;
        #
        #       # With php5-cgi alone:
        #       fastcgi_pass 127.0.0.1:9000;
        #       # With php5-fpm:
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                #fastcgi_index index.php
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #       deny all;
        #}
}

server {
        listen 80;
        listen [::]:80;

        # SSL configuration
        #listen 443 ssl;
        #listen [::]:443 ssl;

        root /var/www/site1;

        index index.php index.html index.htm;

        server_name site1.com;
        #return 301 http://ift.tt/1KiJYZo;
        location / {
                try_files $uri $uri/ =404;
                #try_files $uri $uri/ /index.html;
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        location ~ \.php$ {
        #       include snippets/fastcgi-php.conf;
        #
        #       # With php5-cgi alone:
        #       fastcgi_pass 127.0.0.1:9000;
        #       # With php5-fpm:
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                #fastcgi_index index.php
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #       deny all;
        #}
}

I am thinking part of the issue has to do with: listen 80 default_server; listen [::]:80 default_server;

Please help if you can; it would be very much appreciated.

Character issue with UTF-8 on LAMP (CentOS)

After installing LAMP on CentOS, everything works fine except for a character problem: accented letters such as "ì", "ù", "ò" and other common characters are replaced by "?", a frequently recurring problem with UTF-8.

So I applied these settings:

**http.conf** : AddDefaultCharset utf-8


in **php.ini** : default_charset = "utf-8"
mbstring.internal_encoding=utf-8
mbstring.http_output=UTF-8
mbstring.encoding_translation=On
mbstring.func_overload=6

**my.conf**   character_set_server=utf8    

**in html php file**: <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

and finally on /etc/sysconfig/i18n with this code :

default setting:

LANG="en_US.UTF-8"
SYSFONT="latarcyrheb-sun16"

my setting:

LANG="it_IT.UTF-8"
SYSFONT="latarcyrheb-sun16"

Italian is my language. After this, the Linux "locale" command still shows:

LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

If the language still does not change after all this, why?

Why does my assembly program give a segfault?

I have the following piece of code that I have to debug:

global _start
_start:
pop esp
js 0x36
xor [eax+edi*2+0x43],ebx
xor [eax+edi*2+0x35],bl
xor [eax+edi*2+0x36],bl
cmp [eax+edi*2+0x37],bl
ss pop esp
js 0x49
aaa
pop esp
js 0x52
xor al,0x5c
js 0x56
xor al,0x5c
js 0x59
cmp [eax+edi*2+0x37],bl
xor ebx,[eax+edi*2+0x32]

After assembling and running that code I get a segmentation fault; it seems that something goes wrong after the 5th line. My Linux asm knowledge is very basic. Any hints or ideas about what exactly is going wrong and how to fix it?

how to configure Qt for android

I am using Debian Linux and I want to configure Qt for Android development. I downloaded Qt from the Qt site, configured the JDK, SDK, and NDK, gave the SDK and NDK paths to the Qt options, and then restarted Qt Creator. But in the Debuggers option Qt says:

/home/user/Qtrequirement/android-ndk-r10e/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gdb not exist.

I searched Google and found another NDK on the official Qt site that has a gdb binary but does not contain every NDK file; Qt says it is not a top-level NDK folder.
I have googled everything and installed every dependency package that Qt uses.

Can anyone suggest a solution that builds an apk file and runs it on an Android device?
FYI, any application that does not target Android runs successfully, but I cannot build an apk file using Qt.
Does anyone know the answer to this problem on Debian Linux?

How to grep just the string "ip" in a file?

I am trying to find every way to find a string in some text, and I want more ways using grep or sed. (Bear in mind it's case sensitive.)

  • Every word containing the string "ip", redirecting the output to /root/found:
 grep ip /usr/share/dict/words > /root/found
  • Just words starting with "ip", redirecting the output to /root/found:
 grep '^ip' /usr/share/dict/words > /root/found
  • Just the word "ip" alone, redirecting the output to /root/found:
 grep '^ip$' /usr/share/dict/words > /root/found
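The three filters can be cross-checked in Python on a toy stand-in for /usr/share/dict/words:

```python
words = ["ship", "ip", "ipsum", "tulip", "cat"]

contains = [w for w in words if "ip" in w]           # like: grep ip
starts   = [w for w in words if w.startswith("ip")]  # like: grep '^ip'
exact    = [w for w in words if w == "ip"]           # like: grep '^ip$'

print(contains)  # ['ship', 'ip', 'ipsum', 'tulip']
print(starts)    # ['ip', 'ipsum']
print(exact)     # ['ip']
```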

Does the gsoapWinHttp plugin for gSOAP support Linux?

I'm using gSOAP proxy services for EWS, and I need SSL and authentication support for the endpoint, while my preferred platform is Linux. From the gSOAP documentation http://ift.tt/1QXhMko I found that it has full SSL and authentication support in C for Linux, but I'm using C++.

So when I searched Google for SSL and authentication support with gSOAP in C++, I found this link for gsoapWinHttp: http://ift.tt/1QXhKcp

The gSOAP documentation says that it supports most platforms, like Windows, Linux, macOS etc., yet gsoapWinHttp uses a Windows library, so I'm a bit confused: does the gsoapWinHttp plugin with gSOAP support the Linux platform?

Any help appreciated. Thank you.

Getting the destination address of UDP packet

I have been using the following example posted on this same site. This is my version of it. (Please excuse my lack of experience with C socket programming.)

In constructor:

int sock = udpsocket_.native();
// sock is bound AF_INET socket, usually SOCK_DGRAM
// include struct in_pktinfo in the message "ancilliary" control data
fd_set fdset;
FD_ZERO(&fdset);
FD_SET(sock, &fdset);
int opt = 1;
setsockopt(sock, IPPROTO_IP, IP_PKTINFO, &opt, sizeof(opt));

where "udpsocket_" is actually a Boost.Asio UDP socket. This is very convenient since, on the one hand, I can have a function that gets the destination IP from the incoming UDP message without needing a raw socket:

int sock = udpsocket_.native();
char cmbuf[0x100];
struct sockaddr_in peeraddr;
struct msghdr mh;
mh.msg_name = &peeraddr;
mh.msg_namelen = sizeof(peeraddr);
mh.msg_control = cmbuf;
mh.msg_controllen = sizeof(cmbuf);
int received = recvmsg(sock, &mh, 0);
for ( // iterate through all the control headers
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&mh);
        cmsg != NULL;
        cmsg = CMSG_NXTHDR(&mh, cmsg))
{
    if (cmsg->cmsg_level != IPPROTO_IP ||
            cmsg->cmsg_type != IP_PKTINFO)
    {
        continue;
    }
    struct in_pktinfo *pi = (struct in_pktinfo*) CMSG_DATA(cmsg);
    char* destAddr = (char*) calloc(4, sizeof(char));
    destAddr = inet_ntoa(pi->ipi_spec_dst);

    stored_UDP_dest_ip_ = ip::address::from_string(destAddr);
}

Now here come the problems:

  • Could I call this "get_destination_IP" asynchronously, in a non-blocking way, the same way I call "async_receive_from"?
  • "recvmsg" stores the right destination IP info, but returns 0. In theory, according to the man page, the number of bytes received ("size_t numbytes") is returned there. Can I still read the datagram with "recvmsg"?
  • Is FD_ZERO necessary here?
  • Is FD_ZERO necessary at every call of the function?

Thank you in advance for your help.

Read-only debugfs file is being written to. Why is that?

I have created a file using the debugfs API at /sys/kernel/debug/test/testFile. I created the file with the mode set to 444, so it is now read-only.

I essentially followed this tutorial in creating this debugfs file, and for this file both read and write operations are defined. Concretely, I create the file with:

debugfs_create_file("testFile", 444, pDebugfs, NULL, &debugfs_fops)

The file is successfully created and I can easily read from it with cat... but why can I also write to it, even though I explicitly created it as read-only? I am logged in as root. Why is that? Shouldn't it refuse to let me write to it?

Finally, ls -l results:

-r--r--r-- 1 root root 0 Jun 28 15:27 /sys/kernel/debug/test/testFile

Now, you may argue that since I am root, I can write to any file. Well, if you search for "let's access blob file" on that page, the OP cannot write to the blob file, even as root, because it is read-only. Why is that?

Best Light server (Linux + Web server + Database) for Raspberry Pi [on hold]

I would like to install a web server with a database on a Raspberry Pi (a small computer). The machine has only 1 GB of RAM.

I want to know the best combination of Linux distribution, web server, and DBMS to run a local server for multiple users with minimal latency; I will use PHP on the server. And what are the best settings for good performance and stability (memory usage, disabling plugins, disabling services, etc.)?

I'm thinking of a light Debian, a lighttpd server, and SQLite for the database. Is this a good solution?

Performance of system("rm x.*") vs unlink()?

I'm maintaining a legacy Linux system which keeps millions of small files on a large storage array. (Yes, a filesystem nightmare.)

In the C++ code I found files being deleted with calls like system("rm -f /dir/dir/file.*").

I suspect using unlink() would be a lot faster, but how much faster? (I can't really test it in production.)

Does anybody have comparative data on this?

The old code is already fragile, and replacing the handy system() calls with unlink(), getting the globbing to work, etc., is a good chunk of work...
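As an illustration of what replacing the shell call involves, here is a sketch of the same glob-then-unlink operation done in-process (shown in Python for brevity; the C++ equivalent would use glob(3) and unlink(2)). The pattern is a placeholder:

```python
import glob
import os

def remove_matching(pattern):
    # Expand the glob in-process and unlink each match,
    # mimicking rm -f: an already-gone file is not an error.
    removed = 0
    for path in glob.glob(pattern):
        try:
            os.unlink(path)
            removed += 1
        except FileNotFoundError:
            pass
    return removed
```

The saving comes mainly from skipping a fork/exec of a shell plus rm for every deletion; the glob expansion itself still costs a directory scan either way.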

g++ fails with "undefined reference" errors to the standard C++ library

I have built and installed g++ and tested it on a simple "Hello World" program and it appears to work.

However, for our larger code, the compile fails with errors such as:

CMakeFiles/gaim_convert.dir/GaimConvert.cpp.o: In function `Output(std::string const&, std::ostream&)':
GaimConvert.cpp:(.text._Z6OutputRKSsRSo[_Z6OutputRKSsRSo]+0x12): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)'

The program that works correctly is:

#include <iostream>

int main() {
  std::cout << "Hello World!" << std::endl;
  std::cin.get();
  return 0;
}

So clearly certain parts of the C++ standard library are installed correctly. This is not simply an "obvious" installation bug where libstdc++ is missing.

The code will compile with a different version of the compiler, so it's not the code.

How can I debug the installation so that this error message goes away? The library path, LD_LIBRARY_PATH, is:

/tec/mannucci/gccBuild/lib64:/tec/mannucci/gccBuild/lib:/usr/local/gmp510/lib:/usr/local/mpfr311/lib:/usr/local/mpc101/lib:/usr/local/ppl011/lib:/usr/local/cloog0162/lib:/usr/local/lib64:/usr/lib64:...

Thank you.

-Tony

What happens internally when deleting an opened file in Linux?

I came across this question and this one on deleting opened files in Linux.

However, I'm still confused about what happens in RAM when a process (call it A) deletes a file opened by another process B.

What baffles me is this (my analysis could be wrong; please correct me if so):

  • When a process opens a file, a new entry for that file is created in the UFDT.
  • When a process deletes a file, all the links to the file are gone; in particular, we have no reference to its inode, so it gets removed from the GFDT.
  • However, when modifying the file (say, writing to it), it must be updated on disk (since its pages get modified/dirty), but it has no reference in the GFDT because of the earlier delete, so we don't know its inode.

The question is: why is the "deleted" file still accessible by the process which opened it? And how does the operating system accomplish that?
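The behavior itself is easy to demonstrate: unlink() only removes the directory entry (the name), while the inode and its data survive until the last open file description is closed. A Python sketch:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("still here")

reader = open(path)    # "process B" holds the file open
os.unlink(path)        # "process A" deletes the only name for the inode
print(os.path.exists(path))  # False: the directory entry is gone
print(reader.read())         # still here: the inode is still referenced
reader.close()         # last reference dropped; the kernel frees the inode
```

Writes through the open descriptor still reach the disk because the kernel tracks the inode by number, not by path; the path is only consulted at open() time.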

Make "Add Qt Sources" work for Qt SDK on Linux (NOT built from source)

This is similar to my question about Step into Qt Sources from Qt Creator on Windows (NOT built from source), but I can't make it work for Linux.

Instead of building from source, I have downloaded the Qt SDK installer, and I've installed Qt to /opt/Qt, and I have the sources at /opt/Qt/5.4/Src.

I cannot step into Qt Sources, so I tried adding a Source Mapping using "Add Qt Sources":


I have tried mapping /var/tmp/qt-src to /opt/Qt/5.4/Src, /opt/Qt/5.4/Src/qtbase, and /opt/Qt/5.4/Src/qtbase/src, none of which worked.

What am I doing wrong? Is the source mapping not /var/tmp/qt-src, or is the target mapping wrong? Does "Add Qt Sources" work at all for the Qt SDK?

I saw a suggestion in a forum thread that it's because the Qt SDK for Linux ships only stripped binaries, while it ships both debug and release DLLs for Windows (which would explain why it worked for Windows, but not for Linux).

Find process information with the top command in Java

I want to get process information using the top command. I have the code below, but it is not working; it just exits the program without any output. My objective is to get the process name, process ID, and memory usage, but that's the later part. Right now I am stuck getting process information from top in Java using grep.

public void getProcessInfo(String processName){

    Runtime rt = Runtime.getRuntime();
    try{
        String[] cmd = { "/bin/sh", "-c", "top | grep "+processName };

        Process proc = rt.exec(cmd);

        InputStream stdin = proc.getInputStream();
        InputStreamReader isr = new InputStreamReader(stdin);
        BufferedReader br = new BufferedReader(isr);

        String line = null;


        while ( (line = br.readLine()) != null){

            System.out.println(line);

        }



    }catch(Exception e){
        e.printStackTrace();
    }
}

}

format and filter a file to a CSV table

I have a file that contains many logs :

PS: the question is inspired by a previous question here, but slightly improved.

at 10:00 carl 1 STR0 STR1 STR2 STR3 <STR4 STR5> [STR6 STR7] STR8:
academy/course1:oftheory:SMTGHO:nothing:
academy/course1:ofapplicaton:SMTGHP:onehour:

at 10:00 carl 2 STR0 STR1 STR2 STR3 <STR4 STR78> [STR6 STR111] STR8:
academy/course2:oftheory:SMTGHM:math:
academy/course2:ofapplicaton:SMTGHN:twohour:

at 10:00 david 1 STR0 STR1 STR2 STR3 <STR4 STR758> [STR6 STR155] STR8:
academy/course3:oftheory:SMTGHK:geo:
academy/course3:ofapplicaton:SMTGHL:halfhour:

at 10:00 david 2 STR0 STR1 STR2 STR3 <STR4 STR87> [STR6 STR74] STR8:
academy/course4:oftheory:SMTGH:SMTGHI:history:
academy/course4:ofapplicaton:SMTGHJ:nothing:

at 14:00 carl 1 STR0 STR1 STR2 STR3 <STR4 STR11> [STR6 STR784] STR8:
academy/course5:oftheory:SMTGHG:nothing:
academy/course5:ofapplicaton:SMTGHH:twohours:

at 14:00 carl 2 STR0 STR1 STR2 STR3 <STR4 STR86> [STR6 STR85] STR8:
academy/course6:oftheory:SMTGHE:music:
academy/course6:ofapplicaton:SMTGHF:twohours:

at 14:00 david 1 STR0 STR1 STR2 STR3 <STR4 STR96> [STR6 STR01] STR8:
academy/course7:oftheory:SMTGHC:programmation:
academy/course7:ofapplicaton:SMTGHD:onehours:

at 14:00 david 2 STR0 STR1 STR2 STR3 <STR4 STR335> [STR6 STR66] STR8:
academy/course8:oftheory:SMTGHA:philosophy:
academy/course8:ofapplicaton:SMTGHB:nothing:

I have tried to apply the code below, but in vain:

BEGIN {
    # set records separated by empty lines
    RS=""
    # set fields separated by newline, each record has 3 fields
    FS="\n"
}
{
    # remove undesired parts of every first line of a record
    sub("at ", "", $1)
    # now store the rest in time and course
    time=$1
    course=$1
    # remove time from string to extract the course title
    sub("^[^ ]* ", "", course)
    # remove course title to retrieve time from string
    sub(course, "", time)
    # get theory info from second line per record
    sub("course:theory:", "", $2)
    # get application info from third line
    sub("course:applicaton:", "", $3)
    # if new course
    if (! (course in header)) {
        # save header information (first words of each line in output)
        header[course] = course
        theory[course] = "theory"
        app[course] = "application"
    }
    # append the relevant info to the output strings
    header[course] = header[course] "," time
    theory[course] = theory[course] "," $2
    app[course] = app[course] "," $3

}
END {
    # now for each course found
    for (key in header) {
        # print the strings constructed
        print header[key]
        print theory[key]
        print app[key]
        print ""
    }
}

Is there any way to get rid of the strings STR* and SMTGH* in order to get this output:

carl 1,10:00,14:00
applicaton,halfhour,onehours
theory,geo,programmation

carl 2,10:00,14:00
applicaton,nothing,nothing
theory,history,philosophy

david 1,10:00,14:00
applicaton,onehour,twohours
theory,nothing,nothing

david 2,10:00,14:00
applicaton,twohour,twohours
theory,math,music

IA32 IDT and linux interrupt handler

In the IDT each entry has some bits called "DPL" - Descriptor Privilege Level: 0 for the kernel and 3 for normal users (maybe there are more levels). I don't understand two things:

  1. Is this the level required to run the interrupt handler code, or to trigger the event that leads to it? Because system_call has DPL=3, in user mode we can execute "int 0x80". But in Linux only the kernel handles interrupts, so can we trigger the event but not handle it, even though we have the right CPL?

  2. In Linux only the kernel handles interrupts, but when an interrupt (or trap) happens, what gets us into kernel mode?

Sorry for any mistakes; I am new to all this stuff and just trying to learn.

Reading camera input from /dev/video0 in Python or C

I want to read from the file /dev/video0, through either C or Python, and store the incoming bytes in a different file. Here is my C code:

#include<stdio.h>
#include<sys/types.h>
#include<sys/stat.h>
#include<fcntl.h>
int main()
{
    int fd,wfd;
    fd=open("/dev/video0",O_RDONLY);
    wfd=open("image",O_RDWR|O_CREAT|O_APPEND,S_IRWXU);
    if(fd==-1)
        perror("open");
    while(1)
    {
        char buffer[50];
        int rd;
        rd=read(fd,buffer,50);
        write(wfd,buffer,rd);
    }

    return 0;
}

When I run this code and terminate the program after some time, nothing happens except that a file named "image" is generated, which is expected.

This is my Python code:

image = open("/dev/video0", "rb")
image.read()

And this is my error when I run this snippet:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 22] Invalid argument

I want to know how to do this using pure C or Python code. Please, no external library suggestions.

irssi apt-get not installing

I'm trying to install irssi via apt-get. When I enter sudo apt-get install irssi I get the following error:

Err http://ift.tt/1gRm4rd kali/main libperl5.14 armel 5.14.2-21+deb7u1
  404  Not Found
Get:1 http://ift.tt/1gRm4rd kali/main irssi armel 0.8.15-5 [1,053 kB]
Fetched 1,053 kB in 7s (141 kB/s)                                              
Failed to fetch http://ift.tt/1KipaRQ  404  Not Found
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

When I type irssi in the terminal I get -bash: irssi: command not found. How do I correctly install irssi? Thanks. When I try sudo apt-get update I get the following error:

W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ift.tt/1nrLYFu kali/updates Release: The following signatures were invalid: KEYEXPIRED 1425567400 KEYEXPIRED 1425567400 KEYEXPIRED 1425567400

W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://http.kali.org kali Release: The following signatures were invalid: KEYEXPIRED 1425567400 KEYEXPIRED 1425567400 KEYEXPIRED 1425567400

W: Failed to fetch http://ift.tt/1rwn4Xr  

W: Failed to fetch http://ift.tt/1rwn6OV  

W: Some index files failed to download. They have been ignored, or old ones used instead.

I'm using Kali Linux as my OS.

Decompile ioncube obfuscated PHP code

My test file is:

<?php
echo "Hello";
?>

Now I encode it with ionCube, and it becomes obfuscated.

ioncube.sh -55 test.php -o test_e.php

Whenever I run php test_e.php > out, it prints Hello into the out file, so the code is decoded and executed somewhere in the system. Is there any way to obtain that decoded PHP code? I searched for lots of things on Google, like xdebug, parsing, and tracing, but nothing worked to recover the decoded PHP.

Is there a way to bulk-rename files in Linux?

Is there a command in shell/Bash or Perl that can rename all the files in a folder?

What I am looking for: my folder contains documents with the following naming convention:

smith_welding_<XXXXXX>.jpg

where XXXXXX is a counter that runs from 001191 to 001254.

I would like to rename all the files, keeping the above convention but with the counter restarting from 000000:

smith_welding_<XXXXXX>.jpg

Is there any command that can help me with the above?
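A sketch of the renumbering in Python (the prefix and starting counter are taken from the question; six-digit zero padding is assumed):

```python
import os
import re

def renumber(folder, prefix="smith_welding_", first=1191):
    pat = re.compile(re.escape(prefix) + r"(\d{6})\.jpg$")
    # Snapshot the listing first; lower numbers rename before higher ones.
    for name in sorted(os.listdir(folder)):
        m = pat.match(name)
        if m:
            new = "%s%06d.jpg" % (prefix, int(m.group(1)) - first)
            os.rename(os.path.join(folder, name), os.path.join(folder, new))
```

Here the target range (000000-000063) cannot collide with the source range (001191-001254); for overlapping ranges, a two-pass rename through temporary names would be safer.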

C++ thread attach/detach segfaults

I use a plugin written in C++ for running queries on MySQL. It's used inside an application made with Xojo (www.xojo.com).

The problem is that if too many queries are executed too often, it crashes on Linux with a segmentation fault.

The plugin works by detaching from the calling thread before executing the query, in order not to block the main application, and then re-attaching once it's done. I think this re-attaching is the problem (gdb debugging on Linux suggests as much), but since I don't have symbols for the Xojo framework I'm not so sure.

These are the two methods/functions used for detaching and re-attaching:

void ReattachCurrentThread(void *token)
{
    static void (*pAttachThread)(void*) = nullptr;
    if (!pAttachThread)
        pAttachThread = (void (*)(void *)) gResolver("_UnsafeAttachCurrentThread");
    if (pAttachThread) pAttachThread( token );
}

void * DetachCurrentThread(void)
{
    static void * (*pDetachThread)(void) = nullptr;
    if (!pDetachThread)
        pDetachThread = (void * (*)(void)) gResolver("_UnsafeDetachCurrentThread");
    if (pDetachThread) return pDetachThread();
    return nullptr;
}

And here is one place where those are called:

REALdbCursor MySQLPerformSelect(MySQLDatabaseData *db, REALstring queryStr)
{
    if (db->fConnection == nullptr) return nullptr;

    if (!LockDatabaseUsage( db )) return nullptr;

    REALstringData stringData;
    if (!REALGetStringData( queryStr, REALGetStringEncoding( queryStr ), &stringData )) return nullptr;

    void *detachToken = DetachCurrentThread();
    int err = mysql_real_query( db->fConnection, (const char *)stringData.data, stringData.length );
    ReattachCurrentThread( detachToken );
    db->CaptureLastError();

    REALDisposeStringData( &stringData );

    REALdbCursor retCursor = nullptr;
    if (0 == err) {
        // Allocate a cursor
        MySQLCursorData *curs = new MySQLCursorData;
        bzero( curs, sizeof( MySQLCursorData ) );

        curs->fCursor = new MySQLCursor( db );

        retCursor = NewDBCursor( curs );
    }

    UnlockDatabaseUsage( db );

    return retCursor;
}

My question is: is there anything wrong with the code above, and would it be expected to cause a segfault because it's not being careful enough somehow? I'm not a C++ programmer, but it seems too blunt to my understanding, e.g. not checking whether the thread is available first. Again, I'm not a C++ programmer, so what I'm saying may be absurd...

The "whole" plugin's code is here: plugin's source

examples of user-space and kernel-space threading

I was asked:

  • for examples of systems with user-space-only threading and of systems with kernel-space-only threading;
  • whether the Native POSIX Thread Library is considered part of user space or kernel space;
  • and whether Java threading is done in user space.

There's a huge amount of information about all these topics, but there doesn't seem to be a direct answer to these specific questions. I hope you can help me.

How can I install Fedora via LAN from an existing installed Fedora system?

I already have a system with Fedora 21 installed on it. I want to install the same OS, with exactly the same software packages, on other existing systems interconnected via LAN. How do I do that? Is there an existing solution?

Currently the other systems are running Windows 7, and we're migrating our infrastructure to Fedora.

How does file system block size work?

Many Linux file systems use a 4 KB block size. Let's say I have 10 MB of hard disk storage. That means I have 2560 blocks available. Now let's say I copy 2560 files, each 1 KB in size. Each 1 KB file will occupy a whole block even though it doesn't fill it.

So my entire disk is now filled, yet I still have 2560 x 3 KB of unused space inside the blocks. If I want to store another file of, say, 1 MB, will the file system allow me to store it? Will it write into the free space left inside the individual blocks? Is there a concept that addresses this problem?

I would appreciate some clarification. Thanks in advance.
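The arithmetic in the question can be sketched directly (the leftover 3 KB per file is what's usually called internal fragmentation):

```shell
# Slack-space arithmetic from the example above: 10 MB disk, 4 KB blocks, 1 KB files.
disk_kb=$((10 * 1024))                          # 10 MB expressed in KB
block_kb=4
file_kb=1

blocks=$((disk_kb / block_kb))                  # total number of blocks
slack_kb=$((blocks * (block_kb - file_kb)))     # unused KB trapped inside blocks

echo "blocks=$blocks slack_kb=$slack_kb"        # blocks=2560 slack_kb=7680
```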

join two files based on a column

I have two files, I want to join them.

$cat t1
 1 1.2
 2 2.2
$cat t2
 1
 2
 1

I want to get the output below:

$cat joind.txt
 1 1.2
 2 2.2
 1 1.2

but when I use the join command, the third line is missing from the output.

Thanks
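As a hedged sketch: join(1) expects both inputs to be sorted and won't repeat matches the way the desired output requires, so an awk lookup table is one workable alternative (recreating the question's t1 and t2 here):

```shell
# Recreate the two sample files from the question.
printf '1 1.2\n2 2.2\n' > t1
printf '1\n2\n1\n'      > t2

# First pass (NR == FNR) loads t1 into a map keyed on column 1; second pass
# prints the mapped line for every key in t2, preserving t2's order and repeats.
awk 'NR == FNR { map[$1] = $0; next }
     $1 in map { print map[$1] }' t1 t2
# prints:
# 1 1.2
# 2 2.2
# 1 1.2
```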

Speed of Azure File Service

Reading the documentation regarding Azure File Service I expected an easy and fast way to mount more space to my virtual Azure machine. However, after mounting a file service on my ubuntu machine (done exactly as explained on this page) I get super slow read/write speeds to this mount.

To give an example, downloading a 1000mb.bin file to my local disk:

sander@sanderpihost:~$ wget http://ift.tt/1rgWNKC
--2015-06-28 08:34:38--  http://ift.tt/1rgWNKC
Resolving www.colocenter.nl (www.colocenter.nl)... 5.39.184.5
Connecting to www.colocenter.nl (www.colocenter.nl)|5.39.184.5|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576000 (1000M) [application/octet-stream]
Saving to: ‘1000mb.bin’

12% [==========>                                                                                ] 134,018,596 37.3MB/s  eta 25s  

And to my mounted folder:

sander@sanderpihost:~/myazuredisk/Incomplete$ wget http://ift.tt/1rgWNKC
--2015-06-28 08:31:39--  http://ift.tt/1rgWNKC
Resolving www.colocenter.nl (www.colocenter.nl)... 5.39.184.5
Connecting to www.colocenter.nl (www.colocenter.nl)|5.39.184.5|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576000 (1000M) [application/octet-stream]
Saving to: ‘1000mb.bin.1’

 0% [                                                                                           ] 3,768,320    466KB/s  eta 34m 8s 

Is this expected? Is there a reason it is so slow? I would expect to hit much higher speeds than the meagre ±500KB/s I seem to get right now.

CTRL-V mapped to paste instead block visual mode in Vim on Elementary OS (linux)

I have just started using Vim on a Linux distribution, Elementary OS. In Vim, CTRL-V appears to be mapped to paste instead of taking me to blockwise visual mode. How do I reverse this? I'm pretty sure I didn't configure Vim to behave this way, and from what I've read so far this should only happen on MS Windows.

Channels missing in compiled kernel in kvm

I have an Ubuntu guest in a virtual machine managed with libvirt. I configured my guest to create pipe and unix channels for virtio tracing purposes:

<channel type='unix'>
  <source mode='connect' path='path/to/socket'/>
  <target type='virtio' name='unix-name'/>
  <address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<channel type='pipe'>
  <source path='path/to/pipe'/>
  <target type='virtio' name='pipe-name'/>
  <address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>

Everything works fine in my guest: the channels appear under /dev inside the virtual machine and I can send data to the host through them. But when I compile a kernel, install it inside my VM, and select it from GRUB, these channels disappear. When I reboot and select the original kernel, the channels are back and work correctly. How can I get KVM to create the channels under the compiled kernel inside my virtual machine?

how to use xrandr from a script?

I am trying to use a bash script to add a resolution through xrandr, and I keep getting an error. Here is my script:

#!/bin/bash

out=`cvt 1500 800`
out=`echo $out | sed 's/\(.*\)MHz\(.*\)/\2/g'`
input=`echo $out | sed 's/Modeline//g'`
#echo $input
xrandr --newmode $input
input2=`echo $out | cut -d\" -f2`
#echo $input2
xrandr --addmode VNC-0 $input2

When I run these commands manually everything works fine, but from the script I keep getting cannot find mode .... Yet when I then run xrandr, I do see the new mode listed.
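For comparison, here is a hedged sketch of the same steps with the quote characters stripped before the mode is created. The Modeline text below is a sample of cvt's output format, and VNC-0 is taken from the script above; leaving the double quotes in place makes them part of the mode name, which is one plausible cause of the cannot find mode error.

```shell
#!/bin/bash
# Sample of the text `cvt 1500 800` prints (captured as a literal so the
# parsing below is reproducible without an X session).
cvt_out='# 1504x800 59.82 Hz (CVT) hsync: 49.65 kHz; pclk: 97.25 MHz
Modeline "1504x800_59.82"   97.25  1504 1584 1736 1968  800 803 813 830 -hsync +vsync'

# Keep the Modeline line, drop the keyword, and strip the quotes: literal
# quote characters would otherwise become part of the mode name.
mode_args=$(printf '%s\n' "$cvt_out" \
    | awk '/^Modeline/ { sub(/^Modeline[ \t]+/, ""); gsub(/"/, ""); print }')
mode_name=${mode_args%% *}
echo "$mode_name"

# On a live X session the mode would then be registered with:
#   xrandr --newmode $mode_args          # word splitting is intentional here
#   xrandr --addmode VNC-0 "$mode_name"
```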

sending signal from parent to child

I am using this tutorial from http://ift.tt/1fXCwYE and trying to understand why the signal is not received by the child.

here is the code:

 #include <stdio.h>
 #include <signal.h>
 #include <stdlib.h>
 #include <unistd.h>  /* for sleep() */
 void sighup(); /* routines child will call upon sigtrap */  
 void sigint();  
 void sigquit();  
 void main()  
 { int pid;  
  /* get child process */  
   if ((pid = fork()) < 0) {  
     perror("fork");  
     exit(1);  
   }  
   if (pid == 0)  
    { /* child */  
     signal(SIGHUP,sighup); /* set function calls */  
     signal(SIGINT,sigint);  
     signal(SIGQUIT, sigquit);  
     for(;;); /* loop for ever */  
    }  
  else /* parent */  
    { /* pid hold id of child */  
     printf("\nPARENT: sending SIGHUP\n\n");  
     kill(pid,SIGHUP);  
     sleep(3); /* pause for 3 secs */  
     printf("\nPARENT: sending SIGINT\n\n");  
     kill(pid,SIGINT);  
     sleep(3); /* pause for 3 secs */  
     printf("\nPARENT: sending SIGQUIT\n\n");  
     kill(pid,SIGQUIT);  
     sleep(3);  
    }  
 }  
 void sighup()  
 { signal(SIGHUP,sighup); /* reset signal */  
   printf("CHILD: I have received a SIGHUP\n");  
 }  
 void sigint()  
 { signal(SIGINT,sigint); /* reset signal */  
   printf("CHILD: I have received a SIGINT\n");  
 }  
 void sigquit()  
 { printf("My DADDY has Killed me!!!\n");  
  exit(0);  
 }  

samedi 27 juin 2015

PulledPork can't locate Snort binary

I have a problem here. I've installed Snort on my CentOS 7 server and want to use PulledPork as a source for rules. Pretty basic stuff...

Configured PulledPork conf:

# What path you want the .so files to actually go to *i.e. where is it
# defined in your snort.conf, needs a trailing slash
sorule_path=/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/

# Path to the snort binary, we need this to generate the stub files
snort_path=/usr/sbin/snort/

# We need to know where your snort.conf file lives so that we can
# generate the stub files
config_path=/etc/snort/snort.conf

Then I ran my PulledPork script:

./pulledpork.pl -c /etc/pulledpork/etc/pulledpork.conf

It gave me an error:

The specified Snort binary does not exist!
Please correct the value or specify the FULL rules tarball name in the pulledpork.conf!
 at ./pulledpork.pl line 1816.

I tried installing a different Snort build (from the binaries section: snort-openappid-2.9.7.3-1.centos7.x86_64.rpm) and changed the pulledpork conf file, but nothing changed. I couldn't google my way to an answer either, so now I am here seeking help. Thank you!

Here are my snort files locations:

/home/aivanov/snort-2.9.7.3-1.centos7.x86_64.rpm
/home/aivanov/snort-openappid-2.9.7.3-1.centos7.x86_64.rpm
/home/aivanov/snort-2.9.7.3-1.src.rpm
/home/aivanov/snort-openappid-2.9.7.3-1.centos7.x86_64.rpm.1
/run/lock/subsys/snort
/sys/fs/cgroup/systemd/system.slice/snortd.service
/sys/fs/cgroup/systemd/system.slice/snortd.service/cgroup.clone_children
/sys/fs/cgroup/systemd/system.slice/snortd.service/cgroup.event_control
/sys/fs/cgroup/systemd/system.slice/snortd.service/notify_on_release
/sys/fs/cgroup/systemd/system.slice/snortd.service/cgroup.procs
/sys/fs/cgroup/systemd/system.slice/snortd.service/tasks
/etc/selinux/targeted/modules/active/modules/snort.pp
/etc/logrotate.d/snort
/etc/sysconfig/snort
/etc/rc.d/init.d/snortd.rpmsave
/etc/rc.d/init.d/snortd
/etc/rc.d/rc0.d/K60snortd
/etc/rc.d/rc1.d/K60snortd
/etc/rc.d/rc2.d/S40snortd
/etc/rc.d/rc3.d/S40snortd
/etc/rc.d/rc4.d/S40snortd
/etc/rc.d/rc5.d/S40snortd
/etc/rc.d/rc6.d/K60snortd
/etc/snort
/etc/snort/rules
/etc/snort/rules/snort-2.9.7.3-1.src.rpm
/etc/snort/rules/snort-2.9.7.3-1.centos7.x86_64.rpm
/etc/snort/rules/snort-openappid-2.9.7.3-1.centos7.x86_64.rpm
/etc/snort/snort.conf.rpmsave
/etc/snort/classification.config
/etc/snort/gen-msg.map
/etc/snort/reference.config
/etc/snort/snort.conf
/etc/snort/threshold.conf
/etc/snort/unicode.map
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/from_repo
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/reason
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/releasever
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/var_uuid
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/var_infra
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/command_line
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/checksum_type
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/checksum_data
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/from_repo_revision
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/from_repo_timestamp
/var/lib/yum/yumdb/s/bbf08ea2dbaff9bcfb7095d8dfcf486e694aa1cf-snort-openappid-2.9.7.3-1-x86_64/installed_by
/var/log/snort
/var/spool/mail/snort
/var/tmp/yum-root-3bDmpR/snort-2.9.7.3-1.centos7.x86_64.rpm
/usr/bin/snort_control
/usr/sbin/snort
/usr/sbin/snort-openappid
/usr/lib64/snort-2.9.7.3_dynamicengine
/usr/lib64/snort-2.9.7.3_dynamicengine/libsf_engine.so
/usr/lib64/snort-2.9.7.3_dynamicengine/libsf_engine.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_appid_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_appid_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_appid_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dce2_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dce2_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dce2_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dnp3_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dnp3_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dnp3_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dns_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dns_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_dns_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ftptelnet_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ftptelnet_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ssl_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ftptelnet_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_gtp_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_gtp_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_gtp_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_imap_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_imap_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_imap_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_modbus_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_modbus_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_modbus_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_pop_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_pop_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_pop_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_reputation_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ssl_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_reputation_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ssl_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_reputation_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_sdf_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_sdf_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_sdf_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_sip_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_sip_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_sip_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_smtp_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_smtp_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_smtp_preproc.so.0.0.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ssh_preproc.so
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ssh_preproc.so.0
/usr/lib64/snort-2.9.7.3_dynamicpreprocessor/libsf_ssh_preproc.so.0.0.0
/usr/share/doc/snort-2.9.7.3
/usr/share/doc/snort-2.9.7.3/AUTHORS
/usr/share/doc/snort-2.9.7.3/BUGS
/usr/share/doc/snort-2.9.7.3/CREDITS
/usr/share/doc/snort-2.9.7.3/INSTALL
/usr/share/doc/snort-2.9.7.3/NEWS
/usr/share/doc/snort-2.9.7.3/README.unified2
/usr/share/doc/snort-2.9.7.3/OpenDetectorDeveloperGuide.pdf
/usr/share/doc/snort-2.9.7.3/PROBLEMS
/usr/share/doc/snort-2.9.7.3/README
/usr/share/doc/snort-2.9.7.3/README.GTP
/usr/share/doc/snort-2.9.7.3/WISHLIST
/usr/share/doc/snort-2.9.7.3/README.PLUGINS
/usr/share/doc/snort-2.9.7.3/generators
/usr/share/doc/snort-2.9.7.3/README.PerfProfiling
/usr/share/doc/snort-2.9.7.3/README.SMTP
/usr/share/doc/snort-2.9.7.3/snort_manual.tex
/usr/share/doc/snort-2.9.7.3/README.UNSOCK
/usr/share/doc/snort-2.9.7.3/README.WIN32
/usr/share/doc/snort-2.9.7.3/snort_manual.pdf
/usr/share/doc/snort-2.9.7.3/README.active
/usr/share/doc/snort-2.9.7.3/README.alert_order
/usr/share/doc/snort-2.9.7.3/README.appid
/usr/share/doc/snort-2.9.7.3/README.asn1
/usr/share/doc/snort-2.9.7.3/README.counts
/usr/share/doc/snort-2.9.7.3/README.csv
/usr/share/doc/snort-2.9.7.3/README.daq
/usr/share/doc/snort-2.9.7.3/README.dcerpc2
/usr/share/doc/snort-2.9.7.3/README.decode
/usr/share/doc/snort-2.9.7.3/README.variables
/usr/share/doc/snort-2.9.7.3/README.decoder_preproc_rules
/usr/share/doc/snort-2.9.7.3/README.dnp3
/usr/share/doc/snort-2.9.7.3/README.dns
/usr/share/doc/snort-2.9.7.3/README.event_queue
/usr/share/doc/snort-2.9.7.3/README.file
/usr/share/doc/snort-2.9.7.3/README.file_ips
/usr/share/doc/snort-2.9.7.3/README.filters
/usr/share/doc/snort-2.9.7.3/README.flowbits
/usr/share/doc/snort-2.9.7.3/README.frag3
/usr/share/doc/snort-2.9.7.3/README.ftptelnet
/usr/share/doc/snort-2.9.7.3/README.gre
/usr/share/doc/snort-2.9.7.3/README.ha
/usr/share/doc/snort-2.9.7.3/README.http_inspect
/usr/share/doc/snort-2.9.7.3/README.imap
/usr/share/doc/snort-2.9.7.3/README.ipip
/usr/share/doc/snort-2.9.7.3/README.ipv6
/usr/share/doc/snort-2.9.7.3/README.modbus
/usr/share/doc/snort-2.9.7.3/TODO
/usr/share/doc/snort-2.9.7.3/README.multipleconfigs
/usr/share/doc/snort-2.9.7.3/README.normalize
/usr/share/doc/snort-2.9.7.3/README.pcap_readmode
/usr/share/doc/snort-2.9.7.3/README.pop
/usr/share/doc/snort-2.9.7.3/README.ppm
/usr/share/doc/snort-2.9.7.3/README.reload
/usr/share/doc/snort-2.9.7.3/README.reputation
/usr/share/doc/snort-2.9.7.3/USAGE
/usr/share/doc/snort-2.9.7.3/README.sensitive_data
/usr/share/doc/snort-2.9.7.3/README.sfportscan
/usr/share/doc/snort-2.9.7.3/README.sip
/usr/share/doc/snort-2.9.7.3/README.ssh
/usr/share/doc/snort-2.9.7.3/README.ssl
/usr/share/doc/snort-2.9.7.3/README.stream5
/usr/share/doc/snort-2.9.7.3/README.tag
/usr/share/doc/snort-2.9.7.3/README.thresholding
/usr/share/man/man8/snort.8.gz
/usr/local/lib/snort_dynamicrules

Thanks for your help!

How to prompt user to change password with a certain complexity in Ubuntu?

I have to change the passwords of about 100 computers connected in a network, all of which are running either Linux Mint or Ubuntu 14.04. How would I approach this with a script that a server pushes out to the computers?
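One hedged sketch of an approach: expire each account's password with chage -d 0 over ssh, so users are prompted to change it at next login, and let PAM (e.g. pam_pwquality) enforce the complexity rules. The host names, account name, and output file below are all hypothetical:

```shell
# Generate one command per host; swap the echo for a real ssh invocation
# once the host list and credentials are in place.
hosts='lab01 lab02 lab03'
for host in $hosts; do
    # chage -d 0 marks the password as expired, forcing a change at next login.
    echo "ssh root@$host chage -d 0 student"
done | tee push-commands.txt
```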

Linux Kernel init fails in encrypted filesystem

I am trying to build a Linux OS with an encrypted filesystem for the whole OS (boot, kernel, root, ...).

I modified the EXT4 filesystem's read and write functions. After running a lot of tests, reads and writes work fine.

EDIT:

My change is a simple XOR of the file contents.

My tests include reading/writing text files, tar archive creation/deletion, sound and video file creation/copying/deletion, and some stress tests.

The next step was to boot a simple Linux-based OS on this encrypted filesystem. I modified the GRUB 2 bootloader so it can boot the kernel from the encrypted disk. Then I faced this problem:

  • GRUB can load the Linux kernel and the kernel boots, but when it tries to run the init process I get a kernel panic with the message: "init Not tained".

I can see from the earlier messages that the filesystem is loaded by the kernel and that it actually reads the init file, but it refuses to run init.

My question is: does the kernel read the init file in any way other than the standard read system call? Is there something I am doing wrong here?

Any help would be greatly appreciated

EDIT:

now the question is:

how can I decrypt the data that the kernel accesses through memory mapping?

browse ftp file in firefox browser linux

I want to let users upload files to my server through the browser. I use <input type="file" /> for the upload.

Some of my users want to upload a file that lives on an FTP server. On Windows, these users can type an FTP URL into the file dialog's address bar and select the file to upload, but Linux users can't pick a file from FTP in the file browser.

So how can my Linux users upload files from FTP to my server with the HTML input tag? Or how can they access FTP from the file browser? Note that my users use Firefox.

IPv6: Does DAD happen for IPs that do not belong to the link-local address family too?

As per RFC 4862, for IPv6, Duplicate Address Detection (DAD) happens for every self-assigned link-local IP. However, it is unclear whether the term "link-local" there refers only to the link-local family/type of addresses, or to any IP address present on the link / one hop away (in other words, in the LAN).

If a static IPv6 address that does not belong to the link-local type (but rather to the global type) is assigned to a node in the LAN, does DAD happen for such an address too?

If this is implementation-dependent, what is the behavior in Linux?

Docker install continuously failing on Debian 7

I'm trying to install Docker on Debian 7, but I am running into the following situation, which isn't covered in their tutorials. Any ideas?

me@computer[04:46:05] ~ $ wget -qO- https://get.docker.com/ | sh
libkmod: ERROR ../libkmod/libkmod.c:554 kmod_search_moddep: could not open moddep file '/lib/modules/3.18.5-x86_64-linode52/modules.dep.bin'
Warning: current kernel is not supported by the linux-image-extra-virtual
 package.  We have no AUFS support.  Consider installing the packages
 linux-image-virtual kernel and linux-image-extra-virtual for AUFS support.
+ sleep 10
+ sudo -E sh -c sleep 3; apt-get update

// bunch of apt-get output...

W: Failed to fetch http://ift.tt/1dpYMbS  404  Not Found

W: Failed to fetch http://ift.tt/1dpYMbU  404  Not Found

W: Failed to fetch http://ift.tt/1dpYJwy  404  Not Found

E: Some index files failed to download. They have been ignored, or old ones used instead.

awk full join 2 files on single column

I have 2 CSV files that I want to join together using awk.

file1.csv:

A1,B1,C1
"apple",1,2
"orange",2,3
"pear",5,4

file2.csv:

A2,D2,E2,F2
"apple",1,3,4
"peach",2,3,3
"pear",5,4,2
"mango",6,5,1

This is the output i want:

A1,B1,C1,A2,D2,E2,F2
"apple",1,2,"apple",1,3,4
"orange",2,3,NULL,NULL,NULL,NULL
"pear",5,4,"pear",5,4,2
NULL,NULL,NULL,"peach",2,3,3
NULL,NULL,NULL,"mango",6,5,1

I want to do a full join of file1 and file2 where A1 = A2. file2 has more rows than file1. For records that don't have a matching key, NULL values should be inserted instead.
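A hedged sketch of a full outer join in awk, recreating the two sample files above; note that the NULL padding widths (three and four columns) are hardcoded to match these particular files:

```shell
# Recreate the sample inputs from the question.
cat > file1.csv <<'EOF'
A1,B1,C1
"apple",1,2
"orange",2,3
"pear",5,4
EOF
cat > file2.csv <<'EOF'
A2,D2,E2,F2
"apple",1,3,4
"peach",2,3,3
"pear",5,4,2
"mango",6,5,1
EOF

# Pass 1 (NR == FNR) loads file2 into a map keyed on column 1 and remembers
# its row order; pass 2 walks file1 in order, and the END block emits the
# file2 rows that never matched.
awk -F, '
NR == FNR {
    if (FNR == 1) hdr2 = $0; else { right[$1] = $0; order[++n] = $1 }
    next
}
FNR == 1    { print $0 "," hdr2; next }           # joined header row
$1 in right { print $0 "," right[$1]; matched[$1] = 1; next }
            { print $0 ",NULL,NULL,NULL,NULL" }   # no match in file2
END {
    for (i = 1; i <= n; i++)
        if (!(order[i] in matched))
            print "NULL,NULL,NULL," right[order[i]]
}' file2.csv file1.csv > joined.csv

cat joined.csv
```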

What is a method to notify you when things "don't" happen?

I have lots of different scripts and quite a few cron jobs that trigger different things throughout the day. Many times it is to download data from an external API or to periodically run a script of some type.

However, I am at a loss in finding a simple method to notify me if these things don't happen. For example, recently, something happened on one of my servers that caused all the cron jobs to stop running. It took a few days before I started getting complaints that things weren't working right. What are some of the methods you use to make sure things happen on a regular basis?
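One common pattern is a "dead man's switch": each job touches a heartbeat file when it succeeds, and a separate watchdog cron job alerts when that file goes stale. A minimal sketch, where the path and threshold are made up:

```shell
heartbeat=/tmp/nightly-job.heartbeat
max_age=300                         # seconds of silence tolerated before alerting

touch "$heartbeat"                  # the real cron job would do this on success

# Watchdog side: compare the file's mtime against the threshold.
now=$(date +%s)
mtime=$(stat -c %Y "$heartbeat")    # GNU stat; %Y is seconds since epoch
age=$((now - mtime))

if [ "$age" -gt "$max_age" ]; then
    status="ALERT: heartbeat is ${age}s old"   # send mail / notification here
else
    status="OK"
fi
echo "$status"
```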

Define a Linux manual page's TITLE text when using docbook2man?

I'm experimenting with Linux manual (man) page creation using DocBook, specifically using 'docbook2man' on a Fedora 20 box, and I've been unable to figure out how to create the manual's title text.

For example, if I open the man-pages(7) manual page, the manual's title is MAN-PAGES(7) and the manual's title text is Linux Programmer's Manual.

For further clarification, man-pages(7) defines the TH command as

.TH title section date source manual

It's the manual element (e.g., Linux Programmer's Manual) that I'm trying to figure out how to create using docbook2man.

I've been experimenting with the example code found in section 4.6 "Generating a man page" on the Using DocBook website. The pertinent sections of that code example are provided below (see Listing 1). The file name I'm using for this example code is foo-ref.sgml. The command line I'm using is

docbook2man foo-ref.sgml


Listing 1. Example SGML manual page

<!DOCTYPE refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN">
<refentry>

<refentryinfo>
    <date>2001-01-01</date>
</refentryinfo>

<refmeta>
    <refentrytitle>
        <application>foo</application>
    </refentrytitle>
    <manvolnum>1</manvolnum>
    <refmiscinfo>foo 1.0</refmiscinfo>
</refmeta>

<refnamediv>
    <refname>
        <application>foo</application>
    </refname>
    <refpurpose>
    Does nothing useful.
    </refpurpose>
</refnamediv>

<refsynopsisdiv>
    <refsynopsisdivinfo>
        <date>2001-01-01</date>
    </refsynopsisdivinfo>
    <cmdsynopsis>
    <command>foo</command>
<arg><option>-f </option><replaceable class="parameter">bar</replaceable></arg>
<arg><option>-d<replaceable class="parameter">n</replaceable></option></arg>
<arg rep="repeat"><replaceable class="parameter">file</replaceable></arg>
    </cmdsynopsis>
</refsynopsisdiv>

<refsect1>
    <refsect1info>
        <date>2001-01-01</date>
    </refsect1info>
    <title>DESCRIPTION</title>
    <para>
    <command>foo</command> does nothing useful.
    </para>
</refsect1>
<!-- etc. -->
</refentry>


When I process this source code with docbook2man, a man page named 'foo.1' is generated whose .TH macro is rendered as shown below, but with an empty string "" for the manual's title text element:

.TH "FOO" "1" "2001-01-01" "foo 1.0" ""

I've dug around in the DocBook 5 refentry reference manual, trying various tags, but so far I haven't found anything that produces the title text. I've also searched the web for DocBook manual page examples, but none of the examples I've found produces the manual title text. So I'm starting to wonder whether this is even doable with docbook2man.

Any suggestions?

Resizing a partition in Linux - Bad magic number in super-block error

I was trying to resize my partition with parted and resize2fs.

I tried the following:

#parted
Partition Table: msdos
Number  Start   End     Size    Type     File system     Flags
 1      2097kB  21.0GB  21.0GB  primary  ext4            boot
 2      21.0GB  500GB   479GB   primary  ext4
 3      500GB   500GB   536MB   primary  linux-swap(v1)

(parted) rm 2
(parted) mkpart
Partition type?  primary/extended? primary
File system type?  [ext2]? ext4
Start? 41GB
End? 500GB
(parted) q
Information: You may need to update /etc/fstab.

#resize2fs /dev/sda2
resize2fs 1.42.5 (29-Jul-2012)
resize2fs: Bad magic number in super-block while trying to open /dev/sda2
Couldn't find valid filesystem superblock.

Unfortunately I can't understand why this doesn't work. It was an ext4 partition. I would like to resize the partition without loss of data.

fdisk list before operation:
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        4096    40962047    20478976   83  Linux
/dev/sda2        40962048   975718399   467378176   83  Linux
/dev/sda3       975718400   976764927      523264   82  Linux swap / Solaris

Now fdisk shows:
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        4096    40962047    20478976   83  Linux
/dev/sda2        80078848   975718399   447819776   83  Linux
/dev/sda3       975718400   976764927      523264   82  Linux swap / Solaris

Is blocking or allowing websites with "/etc/hosts" really deprecated? What should I use instead?

I'm creating a Java project for my university, and one of its features is to block or allow websites configured by the teacher (it's an open-source laboratory monitoring tool). Actually I need something simpler: block ALL sites and allow only a few (about 2 or 3; all the others must be blocked).

I've found this excellent tutorial that uses /etc/hosts.allow and /etc/hosts.deny to do exactly what I need. However, I discovered that these files, and this method of blocking/allowing websites, are deprecated.

I don't think iptables is a good way to achieve my aim, because to allow access to a single website I would need to allow an IP address, but remember that a single hostname can map to several IP addresses (as with any Google service, Facebook, and even my university's Moodle).

So, what would be the best way to block all websites and allow only a few?

Appending a line just after the matched pattern in sed not working

My /etc/pam.d/system-auth-ac has the below auth parameters set:

auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so

I want to insert pam_tally2.so just after pam_env.so. So I want it to be:

auth        required      pam_env.so
auth        required      pam_tally2.so onerr=fail audit silent deny=5 unlock_time=900
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so

The script that I'm using is :

#! /bin/bash

grep "pam_tally2" /etc/pam.d/system-auth-ac &> /dev/null
if [ $? -ne 0 ];
then
   sed -i '/^[]*account[]*required[]*pam_unix.so/aauth\trequired\tpam_tally2.so onerr=fail audit silent deny=5 unlock_time=900' /etc/pam.d/system-auth-ac
else
   sed -i 's/.*pam_tally2.*/auth\trequired\tpam_tally2.so onerr=fail audit silent deny=5 unlock_time=900/1' /etc/pam.d/system-auth-ac
fi

But it gives this error:

sed: -e expression #1, char 116: unterminated address regex

What am I doing wrong?
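For what it's worth, the empty [] brackets in the address start a bracket expression that swallows the rest of the pattern, which is the likely source of the "unterminated address regex" error. A hedged sketch using a POSIX whitespace class instead, run against a scratch copy of the file (the real path is /etc/pam.d/system-auth-ac):

```shell
# Scratch copy of the relevant lines from the question.
cat > system-auth-ac <<'EOF'
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so
EOF

# [[:space:]]\+ matches the runs of whitespace; GNU sed's one-line `a text`
# form appends after the matched pam_env.so line and expands \t to tabs.
if ! grep -q pam_tally2 system-auth-ac; then
    sed -i '/^auth[[:space:]]\+required[[:space:]]\+pam_env\.so/a auth\trequired\tpam_tally2.so onerr=fail audit silent deny=5 unlock_time=900' system-auth-ac
fi

sed -n 2p system-auth-ac   # the new line should now be second
```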

perl cgi "End of script output before headers"

Could anyone help me with this? It has been bugging me for a couple of days...

environment:
running a simple perl cgi script on fedora 21,
Server version: Apache/2.4.10 (Fedora),
This is perl 5, version 18, subversion 4 (v5.18.4) built for x86_64-linux-thread-multi,
getenforce: Permissive

the cgi script:

#!/usr/bin/perl
print "Content-type: text/html\n\n";

use strict;
use warnings;
print "Hello, world!<br />\n";

foreach my $key (keys %ENV) {
    print "$key --> $ENV{$key}<br />";
}


problem:
The script won't run at 127.0.0.1/~username/subfolder/.
I thought I knew how to set up a Perl CGI environment; the same code works at 127.0.0.1/cgi-bin, 127.0.0.1/cgi-bin/subfolder/, and 127.0.0.1/~username/.

I always get "End of script output before headers" when executing the script under a user's subfolder.
Could anyone help? Thanks