Blackmagic DeckLink Card Installation

Go To: https://www.blackmagicdesign.com/support/family/capture-and-playback

Choose the card from the drop-down list at the top right.

Note: When the DeckLink card is swapped for another model, the driver remains the same, but you need to execute the BlackmagicFirmwareUpdater commands (status/update) for the Linux OS to identify it.

$cd Blackmagic_Desktop_Video_Linux_10.9a7/amd64/deb/
$dpkg -i desktopvideo_*.deb
$apt-get install libgl1-mesa-glx
$apt-get -f install 
$BlackmagicFirmwareUpdater status
$BlackmagicFirmwareUpdater update /dev/blackmagic/io0
$BlackmagicFirmwareUpdater update /dev/blackmagic/io4
$init 6
  • Prerequisites:
$apt-get install libavformat-dev libswscale-dev libavresample-dev 
$apt-get install pkgconf cmake yasm libtool libx264-dev
  • Install x264:
# git clone https://github.com/qupai/x264.git
# cd x264
# ./configure
# make
# make install
  • Install x265:
# git clone git://github.com/videolan/x265
# cd x265/build
# cmake ../source
# make
# make install
  • Install fdk-aac:
# git clone git://git.code.sf.net/p/opencore-amr/fdk-aac
# cd fdk-aac
# autoreconf -if
# ./configure
# make
# make install
  • Install opus:
# git clone git://git.opus-codec.org/opus.git
# cd opus
# autoreconf -if
# ./configure
# make
# make install
  • Install libav:
# git clone git://github.com/libav/libav
# cd libav
# ./configure --enable-gpl --enable-nonfree --enable-libx264 --enable-libx265 --enable-libfdk-aac
# make -j 8 && make install
Download the SDK from https://www.blackmagicdesign.com/support/family/capture-and-playback

Build & Install BMDCapture

# git clone git://github.com/lu-zero/bmdtools
# cd bmdtools
# make SDK_PATH=<path where you unpacked the decklink sdk>/<Target OS>/include

Note: The SDK currently supports Linux and Mac OS X, so <Target OS> can be either Linux or Mac.

Install BMDCapture:

$cp bmdcapture bmdplay /usr/local/bin

Install Media Express:

$cd /root/Blackmagic_Desktop_Video_Linux_10.9a7/deb/amd64/
$dpkg -i mediaexpress_3.5.3a1_amd64.deb
$apt-get install libatk1.0-0
$apt-get -f install
$dpkg -i mediaexpress_3.5.3a1_amd64.deb
Sample command to initiate streaming via Wowza:
bmdcapture -C 0 -m 2 -A 2 -c 2 -V 4 -F nut -f pipe:1 |
  avconv -loglevel warning -i - -async 1 -vsync passthrough \
    -flags +global_header -c:v libx264 -pix_fmt yuv420p \
    -preset:v superfast -tune zerolatency -threads 0 \
    -b:v 700k -minrate 700k -maxrate 700k -bufsize 700k -r 30 \
    -c:a aac -ar 48000 -strict experimental -profile:v baseline \
    -f flv rtmp://wowza.abc.com/xyz/123

 

Reference Links

https://coolchevy.org.ua/2010/09/08/decklink-driver-of-blackmagicdesign-on-gentoo-linux/

https://www.blackmagicdesign.com/support

https://www.blackmagicdesign.com/support/download/9d53d0685c754e728c46d6dd57841fc0/Linux

https://www.sitola.cz/igrid/index.php/DeckLink_Setup_(Linux)

https://forum.blackmagicdesign.com/viewtopic.php?f=3&t=92

https://forum.blackmagicdesign.com/viewtopic.php?f=12&t=40854

https://github.com/lu-zero/bmdtools/wiki (Steps are included here)

 

Docker Swarm

Create a Manager Node:

$docker-machine create -d amazonec2 --swarm --amazonec2-region ap-southeast-1 --amazonec2-zone a --amazonec2-vpc-id vpc-12112 --amazonec2-ssh-keypath [SSH-PRIV-KEY-FILE] master

Note: The public key should also be present in the same directory.

Create 2 Slave Nodes:

$docker-machine create -d amazonec2 --swarm --amazonec2-region ap-southeast-1 --amazonec2-zone a --amazonec2-vpc-id vpc-121212 --amazonec2-ssh-keypath [SSH-PRIV-KEY-FILE] slave1
$docker-machine create -d amazonec2 --swarm --amazonec2-region ap-southeast-1 --amazonec2-zone a --amazonec2-vpc-id vpc-121212 --amazonec2-ssh-keypath [SSH-PRIV-KEY-FILE] slave2

To default docker commands to the manager machine:

$eval $(docker-machine env master)

Log in to master and configure it as the manager node:

$docker-machine ssh master

Get the instance's public IP from the EC2 metadata service:

$curl http://169.254.169.254/latest/meta-data/public-ipv4

$docker swarm init --advertise-addr [PUBLIC-IP]

To add a worker to this swarm, run the following command:

$docker swarm join \
--token SWMTKN-1-34t11111111111111111021crh0xwoktwxzwb \
[PUBLIC-IP]:2377

To add a manager to this swarm, run ‘docker swarm join-token manager’ and follow the instructions.

Log in to each slave and execute the above join command to add it as a worker.

 


DEPLOY A SERVICE


 

NOTE: To resolve containers by the service name you provide, you have to create a separate network with the overlay driver and use that network when creating the service (use the --publish option to expose the port externally).

$docker network create --driver overlay my_net
$docker service create --replicas 2 --network my_net -p 80:80 --name web --mount type=bind,src=/etc/hostname,dst=/usr/share/nginx/html/index.html,readonly nginx
  • This resolves the name “web” to a virtual IP inside the containers, but that virtual IP is not accessible from outside the containers.
  • This loads Nginx containers on both master and slave.

Log into one of the containers:

$apt-get update
$apt-get install dnsutils curl net-tools
$nslookup web
$ifconfig

The IPs shown by nslookup and ifconfig are different; the IP from nslookup is the virtual IP.

This will round-robin (RR) between containers running on the same host only.
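The round-robin behavior can be pictured as cycling over the container IPs behind the service's virtual IP (the IPs below are made up for illustration):

```python
from itertools import cycle

# Hypothetical IPs of the two "web" containers behind the service VIP.
backends = cycle(["10.0.0.3", "10.0.0.4"])

# Four successive requests are handed out in round-robin order.
picks = [next(backends) for _ in range(4)]
print(picks)  # ['10.0.0.3', '10.0.0.4', '10.0.0.3', '10.0.0.4']
```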
Status Check of Service:

$docker service ls
$docker service ps web

To scale up a service:

$docker service scale web=5

To remove the whole setup

$docker service rm web

 

 

Swarm has a built-in load balancer, so why do we need another load balancer?

Swarm does not have:

  1. SSL Termination
  2. Content Based routing
  3. Access control and authorization
  4. Rewrites and redirects.
  5. More on Nginx: advanced LB algorithms, multiprotocol support, advanced logging, limits, scripting, and security.
    (A native ModSecurity module is available for Nginx.)

[BLOG Incomplete]

Segmentation Fault

 

The OS extends its physical memory by using virtual memory, which is implemented with a technique called paging.
Paging is another form of swapping between the HDD and physical memory.
If an application requests a page and the OS cannot find it in memory, a page fault occurs.
If the address of the requested page is invalid, an invalid page fault occurs, which causes the program to abort.

What is a segmentation fault?

In operating systems that use virtual memory, every process is given the impression that it is working with large, contiguous sections of memory. In reality, each process’ memory may be dispersed across different areas of physical memory, or may have been paged out to a backup storage (typically the hard disk). When a process requests access to its memory, it is the responsibility of the operating system to map the virtual address provided by the process to the physical address where that memory is stored. The page table is where the operating system stores its mappings of virtual addresses to physical addresses.

The page table lookup may fail for two reasons. The first is if there is no translation available for that address, meaning the memory access to that virtual address is invalid. This will typically occur because of a programming error, and the operating system must take some action to deal with the problem. On modern operating systems, it will send a segmentation fault to the offending program.

The page table lookup may also fail if the page is not resident in physical memory. This will occur if the requested page has been paged out of physical memory to make room for another page. In this case the page is paged to a secondary store located on a medium such as a hard disk drive (this secondary store, or “backing store”, is often called a “swap partition” if it’s a disk partition or a swap file, “swapfile”, or “page file” if it’s a file). When this happens the page needs to be taken from disk and put back into physical memory.
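The two lookup outcomes described above can be sketched as a toy page table (the pages, frames, and addresses below are invented for illustration; a real page table is a hardware-backed structure, not a dictionary):

```python
PAGE_SIZE = 4096

# Virtual page number -> ("RAM", frame) if resident, ("DISK", slot) if paged out.
# A page with no entry at all has no valid translation.
page_table = {
    0: ("RAM", 7),    # resident in physical frame 7
    1: ("DISK", 42),  # valid mapping, but paged out to swap slot 42
}

def access(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    entry = page_table.get(page)
    if entry is None:
        # No translation available: invalid access -> segmentation fault.
        return "segmentation fault"
    where, location = entry
    if where == "DISK":
        # Valid but not resident: page it back in, then retry the access.
        page_table[page] = ("RAM", location)  # pretend it landed in frame `location`
        where, location = page_table[page]
    return f"physical address {location * PAGE_SIZE + offset}"

print(access(100))             # resident page: translates directly
print(access(PAGE_SIZE + 5))   # page fault: paged back in, then translated
print(access(10 * PAGE_SIZE))  # no translation: segmentation fault
```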

Common causes of a segmentation fault:

  •     Attempting to execute a program that does not compile correctly (some compilers will output an executable file despite the presence of compile-time errors)

  •     Dereferencing NULL pointers
  •     Attempting to access memory the program does not have rights to (such as kernel structures in process context)
  •     Attempting to access a nonexistent memory address (outside process’s address space)
  •     Attempting to write read-only memory (such as code segment)
  •     A buffer overflow
  •     Using uninitialized pointers

When a page fault occurs, the OS does the following:

  •     Determine the location of the data in auxiliary storage.
  •     Obtain an empty page frame in RAM to use as a container for the data.
  •     Load the requested data into the available page frame.
  •     Update the page table to show the new data.
  •     Return control to the program, transparently retrying the instruction that caused the page fault.
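The steps above can be sketched as a minimal demand-paging handler, with a two-frame "RAM" and a naive evict-frame-0 replacement policy (all structures and the policy are invented for illustration):

```python
NUM_FRAMES = 2
ram = [None] * NUM_FRAMES                  # physical memory: frame -> data
frame_to_page = {}                         # frame -> resident page (for eviction)
page_table = {}                            # page -> frame (resident pages only)
backing_store = {0: "A", 1: "B", 2: "C"}   # auxiliary storage (disk)

def handle_page_fault(page):
    # 1. Determine the location of the data in auxiliary storage.
    data = backing_store[page]
    # 2. Obtain an empty page frame in RAM (naively evict frame 0 if none is free).
    free = next((f for f in range(NUM_FRAMES) if f not in frame_to_page), None)
    if free is None:
        free = 0
        del page_table[frame_to_page[free]]  # evicted page is no longer resident
    # 3. Load the requested data into the available page frame.
    ram[free] = data
    # 4. Update the page table to show the new mapping.
    frame_to_page[free] = page
    page_table[page] = free

def read(page):
    if page not in page_table:   # not resident: page fault
        handle_page_fault(page)
    # 5. Control returns to the program; the access is transparently retried.
    return ram[page_table[page]]

print(read(0), read(1), read(2))  # -> A B C (reading page 2 evicts page 0)
```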

 

Source: Wikipedia and Webopedia

Install MySQL plugin for Newrelic in a Minute

To use this plugin, you must have:

  • a Java Runtime Environment (JRE) of 1.6 or higher
  • at least one database to monitor (MySQL 5.0 or higher)
  • a New Relic account

New Relic Platform Installer (NPI) is a simple, lightweight command-line tool that helps you easily download, configure, and manage New Relic Platform plugins.

Plugin for a generic Linux OS (here, OS = openSUSE)

LICENSE_KEY=4eeeeeeeeeeeeeeeeeeeeeeee2e bash -c "$(curl -sSL https://download.newrelic.com/npi/release/install-npi-linux-x64.sh)"
npi install com.newrelic.plugins.mysql.instance

Configuration File

#vim ~/newrelic-npi/plugins/com.newrelic.plugins.mysql.instance/newrelic_mysql_plugin-2.0.0/config/plugin.json

 

{
  "agents": [
    {
      "name"    : "Host Name on New Relic UI",
      "host"    : "localhost or RDS endpoint",
      "metrics" : "status,newrelic",
      "user"    : "DB_USER_NAME",
      "passwd"  : "DB_PASSWORD"
    }
  ]
}

Start Plugin:

#cd /root/newrelic-npi/plugins/com.newrelic.plugins.mysql.instance/newrelic_mysql_plugin-2.0.0
#java -Xmx128m -jar plugin.jar

 

 

GitHub = https://github.com/newrelic-platform/newrelic_mysql_java_plugin
Plugin Home Page = https://rpm.newrelic.com/accounts/748441/plugins/directory/52

Run commands in OpsWorks instances using a Chef recipe

Using Custom Recipes in OpsWorks

My Recipe

Create the following directory structure:

myCookbookRepo -> myCustomCookbook -> recipes -> myCustomRecipe.rb

The directory name “recipes” must not be changed (Chef expects it); the remaining names can be whatever you like.

vim myCustomRecipe.rb

execute 'bundle install' do
  cwd '/srv/www/testapp/current'
end

Save it.

ZIP the directory myCookbookRepo.zip and upload to S3 Bucket.

In OpsWorks, click “Stack”, then “Stack Settings”, then “Edit”.

Paste the AWS S3 URL for myCookbookRepo.zip, along with the AK and PK.

Now click “Run Command”, select “Execute Recipes” from the Command drop-down list, and enter the following in the “Recipes to execute” box:

cookbook::recipe (e.g. myCustomCookbook::myCustomRecipe)

Click “Execute Recipes”.

DONE!

Reference:
https://docs.getchef.com/resource_execute.html
http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-commands.html

Execute a “rake task” in OpsWorks

All the OpsWorks instances have gems installed in 2 locations:

  1. System-wide location (/usr/local/lib/ruby/gems/2.0.0/gems)
  2. User-home location, which in OpsWorks is the deploy user (/home/deploy/.bundler/galaxylifecms/ruby/2.0.0/gems)

The gems listed in the Gemfile are installed in the user-home location by Bundler.

If you need to execute a custom Ruby script like

#rake my_custom_script

chances are high that you will run into gem dependency errors, even though you listed all the required gems in the Gemfile.

To verify whether the gem in error has been installed by Bundler:

# grep gem_name Gemfile.lock

If it exists there, the issue is that the custom Ruby script is picking up the wrong environment, i.e. the system-wide location instead of the user-home location.

Solution:

#bundle exec rake my_custom_script

The “bundle exec” prefix ensures the custom rake task picks up the gems from the Bundler environment.