No Man's Sky: explorers of an unknown universe

What is "No Man's Sky"?
It's a space exploration game in which you play an adventurer who wakes up after a crash, completely lost on one of the countless planets of an immense galaxy. All you have is your starship, which needs repairing, your survival suit with its backpack, and a peculiar device called the multitool. The latter is useful both for defending yourself and for extracting minerals from the rocks and vegetation you will come across.
Minerals are the key to building anything, since combining them lets you create more complex elements and even whole objects. A bit like Minecraft.

Only once the starship is back in shape, an activity that obviously serves as a mere tutorial, can you begin your journey. The destination is nothing less than the centre of the galaxy. Why you are headed there is not revealed, but in the end it doesn't really matter: the true purpose of the game, right from the start, seems to be the journey itself.
This is just a quick preamble to frame the type of game but, as I said when I opened the site, this is not meant to be a detailed, point-by-point review, rather the description of an experience.

There are games built by groups of writers, teams of artists, dozens of programmers and armies of beta testers. NMS is not one of them. And yet, with all the limits of an independent title on a tight budget, from the very start this game gave me a feeling that few others had managed to give me before: the thrill of being "where no one had gone before".


An infinite galaxy. Or almost.
For those who don't know, despite the massive Sony marketing campaign, the original idea behind NMS is to create a universe of planets entirely through a procedural algorithm. In short, the developers put together a set of mathematical algorithms and physical rules to automatically generate an entire universe made of stars, planets and asteroids. The numbers say: over 18 quintillion planets. On every planet you can find oceans and continents, with vegetation and animals that are themselves generated by the same automatic, procedural and pseudo-random logic, with a certain dose of randomness.
Why?
To create a truly boundless, unexplored universe where anything is possible. One where every place you discover really is something that "no man has ever seen before"... not even the developers themselves.

Does all this have limits?
It certainly does, and after a while you will start to notice the repetitions. However varied, the algorithms are obviously not on the level of the natural ones that shape our universe, and sooner or later you will recognise a certain number of recurring elements. But that doesn't really matter.
What matters is that your first hour in NMS is fantastic. I felt like a kid stepping into a Star Trek episode, asking myself: what will I find on that planet? Will the air be breathable? Glaciers or scorching heat? Will there be animals? And what will they look like?
The feeling is there, and for a moment you really do feel like an explorer.

Then the game goes on and the journey among the stars begins.


Where am I now?
I'm now past my thirtieth hyperspace jump (so I have seen just as many star systems, although not in full... too many planets!) and the feeling is still there.
I could tell you about that planet whose strange rock formations looked like immense roots, or about the small forested moon orbiting next to a red giant planet... this game is made of exactly this: the pleasure of exploring.
Is it perfect at it? Absolutely not. As I said, the limits of the algorithms become obvious after a while. But it doesn't matter, because I haven't grown tired of it yet, and every time I think I have seen all the possible variants I find something new and unexpected.
I have read several articles online about how this title falls short of what was initially promised, but the idea behind it remains unique and deserves a chance. Whether or not it is worth its price, given an idea like this, is for now, in my opinion, a genuinely subjective matter.

I'll keep you posted on the rest.
Right now I have to find a way to get more antimatter: I have an appointment with the Atlas in the next star system. Let's not keep it waiting...

AlwaysOn: Manually add a new replica node using a custom backup path with custom credentials

Management Studio wizards are powerful and can do many things automatically... but sometimes you have to do something manually.
In this case we have to perform the backup and restore steps manually before running the 'Add Node' wizard in Join mode.
This is because the add process cannot impersonate a custom user to access a remote backup path.
In this procedure we'll mount a backup path using custom credentials. Then we'll perform a backup and a restore WITH NORECOVERY of every database protected by AlwaysOn, on the new replica node.
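
The mount and unmount steps below rely on xp_cmdshell, which is disabled by default; here is a minimal snippet to enable it (run it only if your security policy allows it):

--ENABLE xp_cmdshell (required by the mount/unmount steps below)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;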

--EXECUTE THIS ON THE PRIMARY NODE

--CONFIGURE HERE----------------------------------------- 

--SET DESTINATION BACKUP PATH (include a trailing backslash so database names can be appended)
DECLARE @destpath nvarchar(255) = 'REMOTE_PATHDIR'

--SET 'YES' TO RUN THE STATEMENTS NOW OR 'NO' TO ONLY GENERATE THE SCRIPTS
DECLARE @runnow nvarchar(3) = 'NO'

--SET THE MOUNT command (needs xp_cmdshell active)
DECLARE @mount_cmd as nvarchar(4000) = 'net use REMOTE_PATH /user:myuser mypassword'

--SET THE UNMOUNT command (needs xp_cmdshell active)
DECLARE @dismount_cmd as nvarchar(4000) = 'net use REMOTE_PATH /delete'
---------------------------------------------------------

DECLARE @DbName as VARCHAR(40)
DECLARE @primary as VARCHAR(256)
DECLARE @cmd as nvarchar(4000)
DECLARE @cmd2 as nvarchar(4000)
DECLARE @cmd3 as nvarchar(4000)

DECLARE @ext as char(4)= '.bak'
DECLARE @msgerror as varchar(30)

select @primary = primary_replica  from sys.dm_hadr_availability_group_states

-- List all databases protected by AlwaysOn
-- Note: in ANY availability group

declare cursor_DatabasesToProtect CURSOR FOR select name from master.sys.sysdatabases  where name in (select db_name(database_id) from sys.dm_hadr_database_replica_states)

IF (@runnow='YES')
BEGIN
 exec xp_cmdshell @mount_cmd
END

OPEN cursor_DatabasesToProtect
FETCH NEXT FROM cursor_DatabasesToProtect INTO @DbName

WHILE @@FETCH_STATUS = 0
BEGIN
                                               
	set @cmd= 'BACKUP DATABASE [' +  @DbName + '] TO  DISK = ''' + @destpath +  @DbName + '_FULL'' WITH NOFORMAT, COMPRESSION, NOINIT,  NAME = N''' + @DbName + 
	'-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD,  STATS = 10'
                   
	set @cmd2= 'BACKUP LOG [' +  @DbName + '] TO  DISK = ''' + @destpath +  @DbName + '_LOG1'' WITH NOFORMAT, COMPRESSION, NOINIT,  NAME = N''' + @DbName + 
	'-Log Database Backup'', SKIP, NOREWIND, NOUNLOAD,  STATS = 10'

	set @cmd3= 'BACKUP LOG [' +  @DbName + '] TO  DISK = ''' + @destpath +  @DbName + '_LOG2'' WITH NOFORMAT, COMPRESSION, NOINIT,  NAME = N''' + @DbName + 
	'-Log Database Backup'', SKIP, NOREWIND, NOUNLOAD,  STATS = 10'

	BEGIN TRY
		print '----------------------'
	
		IF (@runnow='YES')
		BEGIN
			print @cmd
			exec (@cmd)
			print @cmd2
			exec (@cmd2)
			print @cmd3
			exec (@cmd3)
		END
		ELSE
		BEGIN
			print @cmd
			print @cmd2
			print @cmd3
		END
		print '----------------------'
	END TRY
	BEGIN CATCH
		set @msgerror='BACKUP FAILED FOR ' +  @DbName
		print @msgerror
	END CATCH

	FETCH NEXT FROM cursor_DatabasesToProtect INTO @DbName
END

close cursor_DatabasesToProtect
deallocate cursor_DatabasesToProtect

IF (@runnow='YES')
BEGIN
 exec xp_cmdshell @dismount_cmd
END

--EXECUTE THIS ON THE NEW SECONDARY NODE

--CONFIGURE HERE----------------------------------------- 

--SET DESTINATION BACKUP PATH (include a trailing backslash so database names can be appended)
DECLARE @destpath nvarchar(255) = 'REMOTE_PATHDIR'

--SET 'YES' TO RUN THE STATEMENTS NOW OR 'NO' TO ONLY GENERATE THE SCRIPTS
DECLARE @runnow nvarchar(3) = 'NO'

--SET THE MOUNT command (needs xp_cmdshell active)
DECLARE @mount_cmd as nvarchar(4000) = 'net use REMOTE_PATH /user:myuser mypassword'

--SET THE UNMOUNT command (needs xp_cmdshell active)
DECLARE @dismount_cmd as nvarchar(4000) = 'net use REMOTE_PATH /delete'
---------------------------------------------------------

DECLARE @DbName as VARCHAR(40)
DECLARE @primary as VARCHAR(256)
DECLARE @cmd as nvarchar(4000)
DECLARE @cmd2 as nvarchar(4000)
DECLARE @cmd3 as nvarchar(4000)

DECLARE @ext as char(4)= '.bak'
DECLARE @msgerror as varchar(30)

select @primary = primary_replica  from sys.dm_hadr_availability_group_states

-- List all databases protected by AlwaysOn
-- Note: in ANY availability group

declare cursor_DatabasesToProtect CURSOR FOR select name from master.sys.sysdatabases  where name in (select db_name(database_id) from sys.dm_hadr_database_replica_states)

IF (@runnow='YES')
BEGIN
 exec xp_cmdshell @mount_cmd
END

OPEN cursor_DatabasesToProtect
FETCH NEXT FROM cursor_DatabasesToProtect INTO @DbName

WHILE @@FETCH_STATUS = 0
BEGIN
                                               
    set @cmd= 'RESTORE DATABASE [' +  @DbName + '] FROM DISK = ''' + @destpath +  @DbName + '_FULL'' WITH NORECOVERY, STATS = 1'
    set @cmd2= 'RESTORE LOG [' +  @DbName + '] FROM DISK = ''' + @destpath +  @DbName + '_LOG1'' WITH NORECOVERY, STATS = 1'
    set @cmd3= 'RESTORE LOG [' +  @DbName + '] FROM DISK = ''' + @destpath +  @DbName + '_LOG2'' WITH NORECOVERY, STATS = 1'


	BEGIN TRY
		print '----------------------'
	
		IF (@runnow='YES')
		BEGIN
			print @cmd
			exec (@cmd)
			print @cmd2
			exec (@cmd2)
			print @cmd3
			exec (@cmd3)
		END
		ELSE
		BEGIN
			print @cmd
			print @cmd2
			print @cmd3
		END
		print '----------------------'
	END TRY
	BEGIN CATCH
		set @msgerror='RESTORE FAILED FOR ' +  @DbName
		print @msgerror
	END CATCH

	FETCH NEXT FROM cursor_DatabasesToProtect INTO @DbName
END

close cursor_DatabasesToProtect
deallocate cursor_DatabasesToProtect

IF (@runnow='YES')
BEGIN
 exec xp_cmdshell @dismount_cmd
END

If you see no errors, you are now ready to use the Management Studio wizard to add the new node to the AlwaysOn availability group.
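
If you prefer T-SQL over the wizard, a minimal sketch of the join step follows; it assumes a hypothetical availability group name [MyAG] and that the replica has already been added to the group on the primary. Run it on the new secondary:

--JOIN THE NEW SECONDARY TO THE AVAILABILITY GROUP (alternative to the wizard)
ALTER AVAILABILITY GROUP [MyAG] JOIN;
GO
--Repeat for every database restored WITH NORECOVERY
ALTER DATABASE [MyDatabase] SET HADR AVAILABILITY GROUP = [MyAG];
GO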

MovieMania! v.1.0

UPDATE: v.1.0 is no longer supported and is no longer downloadable.

Go to MovieMania! v.2.0 FREE

Movie Mania is a movie library manager, developed as a personal project. You can use it to catalog your downloaded movies, your DVDs and your Blu-rays. Choose a folder to scan and watch Movie Mania find covers and information about your movies, adding them to your library at the same time.
When necessary, you can correct covers with the fast cover search: it takes just a few seconds.
Find photos and information about your movies with a click: overview, release date, budget, duration, photos, cast and crew profiles and so on...
Fill your library with your memorable movies... or find information about movies still in production, directly from web databases!

Movie Mania – Movie Library Manager – for Windows Vista / 7 / 8 / 8.1 / 10 now available!

Main features:
– Automatic and Configurable Folder Scan
– Automatic Movie Classification
– Poster Information and Photos
– Amazing Movie Photos from the Web
– Cast and Crew Information
– Multiple Skins and Colors
– Fast Online Movie Search
– Add Your Memorable Movies to Your Library
– Quick Filters and Search Options

Movie Mania is based on the Web API provided by http://www.themoviedb.com, free for commercial and private use:
API FAQ: http://www.themoviedb.org/faq/api
Details: http://www.themoviedb.org/documentation/api

Recover a database with a DAMAGED and/or LOST log file

In this procedure we'll manage one of the worst situations a DBA has to face: corrupted files and data loss. When this happens, the usual approach is a full restore, but here we'll use SQL Server features to reduce downtime (avoiding a complete restore) and data loss.

Possible starting problems:
Corrupted logfile
Corrupted logfile during a long transaction
Logfile volume corrupted or lost during transactions

At this point there are different solutions, depending on the state of the database at crash time:

SCENARIO 1: No transactions running during crash.
Solution:
If no transactions were running at the time of the crash the solution is easy, because SQL Server automatically rebuilds a lost log file during database startup. So:
1) Detach the corrupted database (see the sketch after the notes)
2) Rename the old corrupted log file to *.OLD
3) Attach the database using:

CREATE DATABASE [MYDATABASE] ON
 ( FILENAME = N'D:\Microsoft SQL Server\YourDataPath\Data\Database.mdf' )
 FOR ATTACH_REBUILD_LOG
 GO
 Notes:
 - SQL Server will try to rebuild the log file in the ORIGINAL path.
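
For steps 1 and 2, a minimal sketch (the path is the same illustrative one used above, and it assumes xp_cmdshell is enabled):

USE [master]
GO
--1) Detach the corrupted database
EXEC master.dbo.sp_detach_db @dbname = N'MYDATABASE'
GO
--2) Rename the damaged log file out of the way
EXEC xp_cmdshell 'rename "D:\Microsoft SQL Server\YourDataPath\Data\Logfile.ldf" Logfile.ldf.OLD'
GO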

SCENARIO 2: Transactions running during crash
Solution:
ATTACH_REBUILD_LOG *IS NOT* allowed in this situation, because SQL Server finds open transactions in the database and pending rollback/roll-forward operations. So if you try, you'll get the following error:

"File activation failure. The physical file name "D:\Microsoft SQL Server\YourDataPath\Data\Logfile.ldf" may be incorrect.

The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.
Msg 1813, Level 16, State 2, Line 1
Could not open new database 'MYDATABASE'.
CREATE DATABASE is aborted."

So, follow this procedure (the full sequence is sketched below, after step 8):
1) Detach the database (sp_detach_db 'MyDatabase')
2) Rename the data file and log file to *.MDF.OLD and *.LDF.OLD
3) Create a new dummy database with THE SAME name and the same data file and log file positions as the original
4) ALTER DATABASE MyDatabase SET OFFLINE
5) Now you can put the original data file back in its original position
6) ALTER DATABASE MyDatabase SET ONLINE. This will fail, but now we can rebuild the log file
7) ALTER DATABASE [MyDatabase] REBUILD LOG ON (NAME='MyDatabaseLog', FILENAME='D:\Microsoft SQL Server\YourDataPath\Data\Logfile.ldf')
At this point the database will be usable, but at the end SQL Server will show this warning:
Warning: The log for database 'MyDatabase' has been rebuilt. Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context on the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.
8) Final step: open the database to users:
ALTER DATABASE [MyDatabase] SET MULTI_USER
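
Put together, the whole sequence looks like the sketch below (file names and paths are the same illustrative ones used above; the OS-level renames of steps 2 and 5 are left as comments):

USE [master]
GO
--1) Detach the database
EXEC master.dbo.sp_detach_db @dbname = N'MyDatabase'
GO
--2) At OS level: rename the original data file and log file to *.MDF.OLD and *.LDF.OLD
--3) Create a dummy database with the same name and the same file positions
CREATE DATABASE [MyDatabase] ON
 ( NAME = N'MyDatabase', FILENAME = N'D:\Microsoft SQL Server\YourDataPath\Data\Database.mdf' )
 LOG ON
 ( NAME = N'MyDatabaseLog', FILENAME = N'D:\Microsoft SQL Server\YourDataPath\Data\Logfile.ldf' )
GO
--4) Take it offline
ALTER DATABASE [MyDatabase] SET OFFLINE
GO
--5) At OS level: put the ORIGINAL data file back in its original position
--6) Bring it online (this attempt is expected to fail)
ALTER DATABASE [MyDatabase] SET ONLINE
GO
--7) Rebuild the log file
ALTER DATABASE [MyDatabase] REBUILD LOG ON
 ( NAME = N'MyDatabaseLog', FILENAME = N'D:\Microsoft SQL Server\YourDataPath\Data\Logfile.ldf' )
GO
--8) Open the database to users
ALTER DATABASE [MyDatabase] SET MULTI_USER
GO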

Notes:
– In FULL recovery model take a new FULL BACKUP as soon as possible, because the restore chain is broken and you need a new baseline for log backups.
– *Ask for a double-check of application consistency*, because the recovered data could be NOT consistent at application level (we have done an incomplete recovery). If the application checks fail and nothing can be fixed quickly at application level, you have to consider, in the end, a complete restore.

Monitoring Wait Events of a single Session or Query in SQL Server

Using sys.dm_os_wait_stats is not very useful for deep troubleshooting, because this view contains wait events for ALL processes/queries running on your instance since the last restart. With the command DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) we can reset this view without a restart, but on big, stressed instances with many processes and applications running this still isn't a really good solution.
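
For reference, the instance-wide view and the reset command mentioned above look like this (standard DMV columns, shown here just for comparison with the per-session approach below):

--Instance-wide waits accumulated since the last restart (or the last manual clear)
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
GO

--Reset the counters without restarting the instance
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
GO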

SQL Server 2008 introduced a new, powerful and flexible way to collect performance data: Extended Events.
With EE we can collect performance data for a wide range of events, with almost any filter we want.

The following procedure collects and manages, asynchronously, the data of a SINGLE session.
Using this solution you can collect wait event info for your own session while testing, or the wait events of a single user or application.

NOTE: stats are collected in a file system folder, so keep an eye on the space used while collecting.

--- 1) Drop the monitor session if it exists.
IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'WaitMon')
    DROP EVENT SESSION WaitMon ON SERVER
GO

--- 2) Create the new monitor session.
CREATE EVENT SESSION WaitMon ON SERVER
ADD EVENT sqlos.wait_info
    (WHERE sqlserver.session_id = 79 )              ---INSERT HERE the session id to monitor
ADD TARGET package0.asynchronous_file_target       
    (SET FILENAME = N'S:\temp\EE_WaitMonStats.xel', ---INSERT HERE the correct path for collected data stats
     METADATAFILE = N'S:\temp\EE_WaitMonStats.xem')
WITH (max_dispatch_latency = 1 seconds);
GO

--- 3) Start the Monitor session
ALTER EVENT SESSION WaitMon ON SERVER STATE = START;
GO

--- 4) >>>>>>...run your query or wait for data collection from spid session....<<<<<<

--- 5) Stop the Monitor session
ALTER EVENT SESSION WaitMon ON SERVER STATE = STOP;
GO

--- 6) Load collected data in a temp table
CREATE TABLE #RawEventData (
    Rowid  INT IDENTITY PRIMARY KEY,
    event_data XML);
GO

INSERT INTO #RawEventData(event_data)
SELECT
    CAST (event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file
        ('S:\temp\EE_WaitMonStats*.xel',          ---INSERT HERE the correct path for collected data stats 
         'S:\temp\EE_WaitMonStats*.xem', null, null);
GO

--- 7) Query data to analyze wait events
SELECT
    waits.[Wait Type],
    COUNT (*) AS [Wait Count],
    SUM (waits.[Duration]) AS [Total Wait Time (ms)],
    SUM (waits.[Duration]) - SUM (waits.[Signal Duration]) AS [Total Resource Wait Time (ms)],
    SUM (waits.[Signal Duration]) AS [Total Signal Wait Time (ms)]
FROM
    (SELECT
        event_data.value ('(/event/@timestamp)[1]', 'DATETIME') AS [Time],
        event_data.value ('(/event/data[@name=''wait_type'']/text)[1]', 'VARCHAR(100)') AS [Wait Type],
        event_data.value ('(/event/data[@name=''opcode'']/text)[1]', 'VARCHAR(100)') AS [Op],
        event_data.value ('(/event/data[@name=''duration'']/value)[1]', 'BIGINT') AS [Duration],
        event_data.value ('(/event/data[@name=''signal_duration'']/value)[1]', 'BIGINT') AS [Signal Duration]
     FROM #RawEventData
    ) AS waits
WHERE waits.[op] = 'End'
GROUP BY waits.[Wait Type]
ORDER BY [Total Wait Time (ms)] DESC;
GO 

--- 8) Cleanup
DROP TABLE #RawEventData;
GO 

How to script logins with the original password HASH and original SID

A quick tip, useful to avoid problems during migrations.
This query allows you to script instance logins with their original SID and password HASH.

This can be very useful to transfer logins to a new or different instance. All the information comes from the master database catalog. You must be a member of the sysadmin role to run this script correctly: users without admin rights will get no output, or an incomplete one. An alternative version for newer SQL Server releases is sketched after the query.

select   
convert(varchar(50),'CREATE LOGIN [' + name + '] with password='),CAST(password AS VARBINARY(26)),
'HASHED,SID=',
CAST(sid AS VARBINARY(26))
from sys.syslogins  where password is not null
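
On SQL Server 2012 and later the password hash (SHA-512) is longer than 26 bytes, so the cast above would truncate it. An alternative sketch based on sys.sql_logins, which builds the whole statement in one string (CONVERT with style 1 keeps the 0x prefix), is:

select 'CREATE LOGIN [' + name + '] WITH PASSWORD = '
       + convert(varchar(max), password_hash, 1) + ' HASHED, SID = '
       + convert(varchar(max), sid, 1)
from sys.sql_logins
where password_hash is not null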

How to improve mysql performance using CACHING

A deep analysis of a performance bottleneck can take long and is based on different kinds of data (OS metrics, IO performance tests, DB wait stats and so on...). Before starting with that (or simply after a new delivery), some simple checks can help you quickly fix some of the most common (and biggest) bottlenecks you can find in a low-performance installation.
One of these is cache sizing.

A cache that is disabled or not correctly configured is a common cause of low-performance problems.
The MySQL engine uses result caching to improve query speed: a query result is cached in memory to avoid slow IO read operations. MySQL uses the query_cache_size parameter to size the engine cache. A result is saved in the cache only if it is smaller than query_cache_limit; cache memory is allocated in blocks of at least query_cache_min_res_unit bytes.

A query result is kept in memory as long as:
– the result is still valid (the underlying data has not been modified)
– it has not been evicted to make room for more recently used results in the cache memory

A good sizing of the cache parameters is based on instance activity. My suggestions are:
– Start with a simple configuration that fits the available system memory (avoiding paging); see the sketch below. Note: by default the MySQL query cache is DISABLED
– Start by keeping query_cache_min_res_unit small
– Check instance metrics after some days of normal database activity to understand how to tune the cache parameters further
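
As a starting point, the size-related parameters can be changed at runtime with SET GLOBAL (a sketch with illustrative values, the same ones used in the example below; query_cache_type itself is normally set in my.cnf before startup):

SET GLOBAL query_cache_size = 33554432;        -- 32 MB for the whole cache
SET GLOBAL query_cache_limit = 2097152;        -- do not cache results bigger than 2 MB
SET GLOBAL query_cache_min_res_unit = 4096;    -- small allocation blocks (4 KB)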

This is an example of an installed configuration:

mysql> show variables like '%query_cache%';
+------------------------------+----------+
| Variable_name                | Value    |
+------------------------------+----------+
| have_query_cache             | YES      |
| query_cache_limit            | 2097152  |
| query_cache_min_res_unit     | 4096     |
| query_cache_size             | 33554432 |
| query_cache_type             | ON       |
+------------------------------+----------+
5 rows in set (0.00 sec)

After some days of normal instance activity, start watching some metrics to understand what's happening (a quick way to read them is shown after the output).
These values are self-explanatory:

mysql> show status like 'QCache%';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      | 957      |
| Qcache_free_memory      | 15523704 |
| Qcache_hits             | 1978478  |
| Qcache_inserts          | 132298   |
| Qcache_lowmem_prunes    | 6037     |
| Qcache_not_cached       | 5969     |
| Qcache_queries_in_cache | 3389     |
| Qcache_total_blocks     | 8396     |
+-------------------------+----------+
8 rows in set (0.00 sec)
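
A quick way to read these numbers is the cache hit ratio: Qcache_hits / (Qcache_hits + Qcache_inserts + Qcache_not_cached) = 1978478 / (1978478 + 132298 + 5969) ≈ 0.93, so roughly 93% of the cacheable SELECTs above were served from the cache, while the low Qcache_lowmem_prunes value suggests the cache is not undersized. The same counters can also be read with a plain query (assuming information_schema.GLOBAL_STATUS is available, as in MySQL 5.6):

SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM information_schema.GLOBAL_STATUS
WHERE VARIABLE_NAME LIKE 'Qcache%';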

Talking about performance, there is no universal "best configuration". You have to work with users and system integrators to understand:
– any cache configuration suggested by the application
– in low-performance situations: what is the performance target?
– are there any test queries usable to validate the configuration tuning?

For a deeper explanation of all mysql cache parameters start from here: http://dev.mysql.com/doc/refman/5.6/en/query-cache.html

Useful Tools from the Web

These are my favourites, used for everyday common jobs. This is not a complete review of them (you can find those anywhere...) but just a list, with the reasons why I chose them.

SQL Script Utility
(http://sqlsrvscrpt.codeplex.com/wikipage?title=DBScript%20Utility)

Management Studio is an incredible management tool... but if you need to script indexes massively it cannot help you. I looked for a tool like Script Utility years ago to support a replication scenario: sometimes the replica had to be re-initialized, so the tables and their indexes (created only on the subscriber) were dropped. I needed a good and fast way to script the indexes on the subscriber, to re-create them after every replica re-initialization.
This tool (completely FREE, from MS CodePlex) was a good solution to this problem.


Classic Shell
(http://www.classicshell.net)

After upgrading my PC to Win 8 I tried for months to work without the Start button. No way! I cannot work without it.
It's not just a link box: it's the complete list of everything I have installed on my PC and a fast shortcut to everything (Control Panel, Computer Management, commonly used apps, recently opened docs and so on...). Win 8.1 will re-introduce it, but with a different purpose (just a quick link to the Start screen...).

Classic Shell for me was the final solution. Well done, fast, easy to install, no (or just a little…) configuration needed. A great tool for a simple but important job. And it’s FREE.

Note: if you want, you can customize the Start button with custom logos or skins. My suggestion is http://www.enkeria.se/download/ClassicShell/Classic_Shell_Buttons_by_Enkeria.zip . It's a good download if you want an MS Windows-like logo (from release 4 the MS Win logo is no longer available in the product) and other fun alternatives.


Massive SQL Server Database Moving Using Detach – Attach: The Complete Procedure

This is a complete, configurable metascript that creates the detach statements, the file move statements and the re-attach statements for every database in your instance.
You only have to configure the final destination for the data files and the list of databases you want to migrate.

The final script generated for every selected database will look something like this (remember to use Ctrl+T to switch to text output mode in Management Studio):

============================================================
  DATABASE: myTestDB
============================================================
 
--  1) DETACH DATABASE
 
USE [master]
GO
ALTER DATABASE [myTestDB] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
EXEC master.dbo.sp_detach_db @dbname = N'myTestDB', @keepfulltextindexfile=N'false'
GO
 
--  2) DATAFILE OS MOVING
 
exec xp_cmdshell 'move "C:Program FilesMicrosoft SQL ServerMSSQL.1MSSQLDATAmyTestDB_log.LDF" L:LogPath'
exec xp_cmdshell 'move "C:Program FilesMicrosoft SQL ServerMSSQL.1MSSQLDATAmyTestDB.mdf" D:DataPath'
exec xp_cmdshell 'move "C:Program FilesMicrosoft SQL ServerMSSQL.1MSSQLDATAmyTestDB_2.ndf" D:DataPath'
exec xp_cmdshell 'move "C:Program FilesMicrosoft SQL ServerMSSQL.1MSSQLDATAmyTestDB_3.ndf" D:DataPath'
exec xp_cmdshell 'move "C:Program FilesMicrosoft SQL ServerMSSQL.1MSSQLDATAmyTestDB_4.ndf" D:DataPath'
exec xp_cmdshell 'move "C:Program FilesMicrosoft SQL ServerMSSQL.1MSSQLDATAmyTestDB_5.ndf" D:DataPath'
 
--  3) ATTACH DATABASE
 
CREATE DATABASE [myTestDB] ON 
( FILENAME = N'L:\LogPath\myTestDB_log.LDF' ),
( FILENAME = N'D:\DataPath\myTestDB.mdf' ),
( FILENAME = N'D:\DataPath\myTestDB_2.ndf' ),
( FILENAME = N'D:\DataPath\myTestDB_3.ndf' ),
( FILENAME = N'D:\DataPath\myTestDB_4.ndf' ),
( FILENAME = N'D:\DataPath\myTestDB_5.ndf' ) 
FOR ATTACH
GO

This is the complete metascript:

set nocount on
----------------------------------------------------
--CONFIGURE HERE
----------------------------------------------------
--     DESTINATION PATH
----------------------------------------------------
--data files destination (keep the trailing backslash)
    declare @pathdata nvarchar(max) 
    set @pathdata = 'D:\DataPath\'
--log files destination (keep the trailing backslash)
    declare @pathlog nvarchar(max) 
    set @pathlog = 'L:\LogPath\'
----------------------------------------------------
--     DATABASE LIST
----------------------------------------------------
    create table #dbs (dbname nvarchar(255))
    --insert into #dbs values('db1')
    --insert into #dbs values('db2')
    --insert into #dbs values('db3')
--Note: to move ALL user databases, the following line inserts all db names into the #dbs table:
Insert into #dbs select name from master..sysdatabases where name not in ('master','model','msdb','tempdb')
---------------------------------------------------

declare @aDB nvarchar(255)
declare @aFile nvarchar(255)
declare @aType nvarchar(255)
declare @file_n integer
declare @count integer
declare @sep nvarchar(1)

declare db_cur CURSOR FOR select * from #dbs
OPEN db_cur
FETCH NEXT FROM db_cur INTO @aDB
WHILE @@FETCH_STATUS = 0
BEGIN
	print '============================================================'
    print '  DATABASE: ' + @aDB
    print '============================================================'
    print ''
	
	print '--  1) DETACH DATABASE'
	print ''
	print 'USE [master]'
	print 'GO'
	print 'ALTER DATABASE [' + @aDB + '] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE'
	print 'GO'
	print 'EXEC master.dbo.sp_detach_db @dbname = N''' + @aDB +''', @keepfulltextindexfile=N''false'''
	print 'GO'
	print ''
	
	print '--  2) DATAFILE OS MOVING'	
	print ''
		
	declare file_cur CURSOR FOR select type_desc,physical_name from sys.master_files where db_name(database_id) = @aDB order by type_desc,physical_name
	OPEN file_cur
		
    FETCH NEXT FROM file_cur INTO @aType,@aFile
    WHILE @@FETCH_STATUS = 0
    BEGIN
		if (@aType='ROWS')
		BEGIN			
			print 'exec xp_cmdshell ''move "' + @aFile + '" ' + @pathdata + ''''
		END
		ELSE
		BEGIN			
			print 'exec xp_cmdshell ''move "' + @aFile + '" ' + @pathlog + ''''
		END
		set @count=@count+1
	FETCH NEXT FROM file_cur INTO @aType,@aFile
	END
	CLOSE file_cur
	DEALLOCATE file_cur	
	print ''
	print '--  3) ATTACH DATABASE'	
	print ''
	--Create ATTACH Statement
	set @count=1
	set @sep=','	
	PRINT 'CREATE DATABASE [' + @aDB + '] ON '
	declare file_cur CURSOR FOR select type_desc,physical_name from sys.master_files where db_name(database_id) = @aDB order by type_desc,physical_name
	select @file_n=count(*) from sys.master_files where db_name(database_id) = @aDB
	OPEN file_cur
    FETCH NEXT FROM file_cur INTO @aType,@aFile
    WHILE @@FETCH_STATUS = 0
    BEGIN
		if (@count=@file_n)
		BEGIN
			set @sep=' '
		END
		
		if (@aType='ROWS')
		BEGIN
			print '( FILENAME = N''' + @pathdata + RIGHT(@aFile, CHARINDEX('\', REVERSE(@aFile))-1) +''' )' + @sep
		END
		ELSE
		BEGIN
			print '( FILENAME = N''' + @pathlog + RIGHT(@aFile, CHARINDEX('\', REVERSE(@aFile))-1) +''' )' + @sep
		END
		set @count=@count+1
	FETCH NEXT FROM file_cur INTO @aType,@aFile
	END
	CLOSE file_cur
	DEALLOCATE file_cur
	
	print 'FOR ATTACH'
	print 'GO'
	print ''
FETCH NEXT FROM db_cur INTO @aDB
END
CLOSE db_cur
DEALLOCATE db_cur
drop table #dbs
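
After running the generated scripts, a quick way to verify that every file ended up in the new location (a sketch reusing the same catalog view as the metascript) is:

select db_name(database_id) as dbname, type_desc, physical_name
from sys.master_files
where db_name(database_id) not in ('master','model','msdb','tempdb')
order by dbname, type_desc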